
Predictable Task Migration Support and Static Task Partitioning for Scalable Multicore Real-Time Systems

Posted on: 2013-01-19
Degree: Ph.D
Type: Dissertation
University: North Carolina State University
Candidate: Sarkar, Abhik
Full Text: PDF
GTID: 1458390008464850
Subject: Computer Science
Abstract/Summary:
Multicores are becoming ubiquitous, not only in general-purpose but also in embedded computing. This trend reflects the steadily increasing processing demands of contemporary embedded applications. On such platforms, predicting timing behavior to ensure that deadlines of real-time tasks can be met is becoming increasingly difficult. While real-time global/semi-partitioned multicore scheduling approaches help to assure deadlines based on firm theoretical properties, their reliance on task migration poses a significant challenge to timing predictability in practice. Task migration (a) reduces timing predictability on contemporary multicores due to cache warm-up overheads, (b) renders locking of cache lines infeasible for multicore real-time systems, and (c) increases traffic on the network-on-chip (NoC) interconnect. Additionally, prior work on static task partitioning for multicore architectures focuses on shared cache organizations with a fixed number of cores. Such schemes are not suitable for static partitioning on scalable multicore architectures that feature a private cache organization. This dissertation addresses these limitations.

The key contributions of this work are as follows:

1. First, a task migration between two cores imposes cache warm-up overheads on the migration target, which can lead to missed deadlines under tight real-time schedules. We propose a novel push-assisted cache migration model to proactively migrate cache lines through novel software and micro-architectural support. Our mechanism incurs a cache migration delay that is only a fraction of the task's execution time. This delay can be steered to fill idle slots in the schedule, i.e., it does not contribute to the execution time of the migrated task. We also propose micro-architectural modifications that further reduce the delay and bus traffic.

2. Second, locked cache lines, which are predominantly used in real-time systems, are immobile during task migration. We address this issue by extending the push-assisted migration model with several cache migration techniques that efficiently retain locked cache lines on a bus-based chip multiprocessor architecture. We also provide deterministic migration delay bounds that help schedulers decide which migration technique(s) to utilize when migrating a single task or multiple tasks. This information also allows the scheduler to determine the feasibility of task migrations, which is critical for the safety of any hard real-time system. To our knowledge, such proactive migration of locked cache lines on multicores is unprecedented.

3. Third, we further the use of locked caches on scalable multicore architectures by analyzing their impact on static task partitioning algorithms. In shared cache architectures, a single resource is shared among all tasks. In scalable architectures with private caches, however, conflicts exist only among the tasks scheduled on the same core. This calls for a cache-aware allocation of tasks onto cores. We propose a novel variant of the cache-unaware First Fit Decreasing (FFD) algorithm called the Naive Locked First Fit Decreasing (NFFD) policy, as well as two cache-aware static partitioning schemes for task sets whose tasks have no intra-task conflicts among locked regions (Scenario A): (1) Greedy First Fit Decreasing (GFFD) and (2) Colored First Fit Decreasing (CoFFD). NFFD is capable of scheduling high-utilization task sets that FFD cannot schedule; the general flavor of such cache-aware first-fit-decreasing policies is sketched below.
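To illustrate the flavor of these policies only, and not the dissertation's exact algorithms, the following minimal sketch shows a cache-conflict-aware first-fit-decreasing partitioner: tasks are considered in order of decreasing utilization, and a task is placed on the first core where both a simple utilization bound and a per-core budget of lockable cache lines are respected. The task fields, the fit test, and the lockable_lines budget are assumptions made for illustration.

```python
# Illustrative sketch only: a cache-conflict-aware first-fit-decreasing
# partitioner in the spirit of the cache-aware FFD variants described above.
# Task parameters, the fit test, and the per-core lockable-line budget are
# hypothetical placeholders, not the dissertation's actual data structures.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    utilization: float          # CPU utilization demand in (0, 1]
    locked_lines: set[int]      # indices of cache lines this task locks

@dataclass
class Core:
    capacity: float = 1.0       # remaining utilization (simple bound, assumed)
    locked_lines: set[int] = field(default_factory=set)
    tasks: list[Task] = field(default_factory=list)

def fits(core: Core, task: Task, lockable_lines: int) -> bool:
    """A task fits if its utilization still fits on the core and the union
    of locked lines on the core stays within the lockable budget."""
    if task.utilization > core.capacity:
        return False
    return len(core.locked_lines | task.locked_lines) <= lockable_lines

def cache_aware_ffd(tasks: list[Task], lockable_lines: int) -> list[Core]:
    cores: list[Core] = []
    # First Fit Decreasing: consider tasks in order of decreasing utilization.
    for task in sorted(tasks, key=lambda t: t.utilization, reverse=True):
        for core in cores:
            if fits(core, task, lockable_lines):
                break
        else:
            core = Core()           # open a new core if no existing core fits
            cores.append(core)
        core.capacity -= task.utilization
        core.locked_lines |= task.locked_lines
        core.tasks.append(task)
    return cores

if __name__ == "__main__":
    demo = [Task("t1", 0.6, {0, 1}), Task("t2", 0.5, {1, 2}),
            Task("t3", 0.3, {3}),    Task("t4", 0.2, {0})]
    for i, core in enumerate(cache_aware_ffd(demo, lockable_lines=4)):
        print(f"core {i}: {[t.name for t in core.tasks]}")
```

A cache-unaware FFD would use only the utilization test; adding the locked-line check is what makes the allocation sensitive to per-core private-cache conflicts.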
CoFFD consistently outperforms GFFD, requiring fewer cores and lower system utilization. For the more general case in which tasks do have intra-task conflicts among locked regions (Scenario B), we split task partitioning into two phases: Task Selection and Task Allocation. Instead of resolving conflicts at a global level, these algorithms resolve conflicts among regions while allocating a task onto a core and perform unlocking at the region level instead of the task level. We show that combining our novel Dynamic Ordering (Task Selection) with a Chaitin-style Coloring scheme (Task Allocation) considerably reduces the number of cores required compared to a basic scheme that combines Monotone Ordering with Regional FFD; a coloring sketch in this spirit follows below.

Overall, this dissertation suggests that the deployment of locked caches and hardware support for cache migration on scalable multiprocessors can enable more predictable and efficient multiprocessor scheduling.
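As an illustration of the Task Allocation flavor only, and not the dissertation's actual algorithm, the minimal sketch below applies Chaitin-style simplify/select graph coloring to a locked-region conflict graph, treating a "spilled" region as one that must be unlocked. The conflict-graph construction, the number of colors k, and the pessimistic spill handling are all assumptions made for illustration.

```python
# Illustrative sketch only: Chaitin-style simplify/select coloring of a
# locked-region conflict graph, with "spill" reinterpreted as unlocking a
# region. The graph, the choice of k (available cache colors), and the
# pessimistic spill handling are hypothetical simplifications.

def chaitin_color(conflicts: dict[str, set[str]], k: int):
    """Return (coloring, unlocked): coloring maps region -> color in range(k);
    unlocked lists regions that could not be colored and must be unlocked.
    The conflict relation is assumed to be symmetric."""
    graph = {r: set(n) for r, n in conflicts.items()}
    stack, unlocked = [], []

    # Simplify: repeatedly remove a region with fewer than k conflicts; if
    # none exists, pick a highest-degree region to unlock (the spill analogue).
    while graph:
        low = next((r for r in graph if len(graph[r]) < k), None)
        victim = low if low is not None else max(graph, key=lambda r: len(graph[r]))
        if low is None:
            unlocked.append(victim)
        else:
            stack.append(victim)
        for n in graph.pop(victim):
            graph[n].discard(victim)

    # Select: pop regions and give each the lowest color unused by its
    # already-colored neighbours; a free color always exists for stacked nodes.
    coloring = {}
    for region in reversed(stack):
        used = {coloring[n] for n in conflicts[region] if n in coloring}
        coloring[region] = min(c for c in range(k) if c not in used)
    return coloring, unlocked

if __name__ == "__main__":
    conflicts = {"r1": {"r2", "r3"}, "r2": {"r1"}, "r3": {"r1"}, "r4": set()}
    print(chaitin_color(conflicts, k=2))
```

Chaitin's original register allocator reinserts spilled nodes optimistically; the sketch drops them pessimistically for brevity, which only strengthens the hedged, illustrative nature of the example.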
Keywords/Search Tags: Migration, Task, Multicore, Scalable, Cache, Real-time, Support, Locked