
Incentive-centered design for scheduling in parallel and distributed systems

Posted on: 2010-02-09    Degree: Ph.D    Type: Dissertation
University: Wayne State University    Candidate: Carroll, Thomas E    Full Text: PDF
GTID: 1448390002988806    Subject: Computer Science
Abstract/Summary:
Distributed systems comprise components that are owned and operated by autonomous organizations. These organizations are rational (self-interested and welfare-maximizing) and have no a priori motivation to cooperate. Rationality leads them to manipulate the system whenever doing so benefits them. Organizations respond to incentives; to prevent manipulation, incentives must be explicitly aligned with the system's objectives. Incentive-centered design (ICD), which integrates the motivational, human-behavior perspective of the social sciences with the computational-tractability perspective of computer science, addresses these concerns and provides methods to mitigate them.

We examine the incentives present in several distributed-system resource allocation problems. We model these problems as games and then use ICD to devise protocols that achieve their desired objectives in a self-interested, competitive setting. The first problem we examine is the design of incentive-compatible mechanisms for scheduling divisible loads in distributed systems. Divisible load is a form of data parallelism in which a large data set is partitioned into fractions, all of which require an identical type of processing. Scheduling requires knowledge of the processors' capacities. Since the processors are rational, they may choose to misreport their capacities, which adversely affects the quality of the schedule. We design several mechanisms that obtain optimal schedules in this setting. Each mechanism is specific to a network architecture, as the architecture affects the partitioning and the optimality conditions. We examine three architectures: bus, linear, and tree networks. For the bus network, we design three classical centralized mechanisms. Centralized mechanisms are not feasible for the tree and linear networks, as the processors cannot communicate directly with the mechanism center, a limitation of these architectures.
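To make the divisible load setting concrete, here is a minimal sketch (an illustration, not taken from the dissertation) of an optimal partition on a bus network, under the simplifying assumptions that communication delay is negligible and that each processor i truthfully reports a rate w_i, its time per unit of load. An optimal schedule then has all processors finish simultaneously, so each fraction is inversely proportional to its processor's rate; a processor that misreports a slower rate is simply assigned less load, which is why misreports degrade the schedule.

```python
def optimal_fractions(rates):
    """Partition a unit divisible load among processors so that all
    finish simultaneously (simplified model: communication cost is
    ignored).  rates[i] is processor i's reported time per unit load;
    slower processors receive proportionally smaller fractions.
    """
    inverse = [1.0 / w for w in rates]
    total = sum(inverse)
    return [x / total for x in inverse]

# Three processors: the first is twice as fast as the second and
# four times as fast as the third.
fractions = optimal_fractions([1.0, 2.0, 4.0])
# fractions == [4/7, 2/7, 1/7]; every processor finishes at time 4/7
```

Because finish time equals fraction times rate, each product here is 4/7, confirming the simultaneous-finish optimality condition for this simplified model.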
Instead, we design distributed mechanisms in which the information and algorithms are distributed throughout the network. These mechanisms introduce new avenues for manipulation, which we address by creating incentives for processors to monitor one another. We then examine divisible load scheduling using coalitional games. An application owner receives a payoff when her job finishes. The amount she receives, however, decreases with time, and the payoff must be shared with the processors computing her job. She forms a coalition with a set of processors to maximize her profit, the difference between payoff and costs. We model this problem as a coalitional game and show that stable coalitions between application users and processors exist.

We next examine parallel job scheduling in parallel systems comprising identical processors. A parallel job differs from a sequential job or task in that it can execute on two or more processors. We consider two types of parallel job scheduling. The first is the batch scheduling of rigid jobs. Batch scheduling requires that all jobs are present at the start of scheduling. A parallel job is rigid if it requires a fixed number of processors: if fewer processors are assigned, the job fails; if more are assigned, the job does not speed up. Associated with each job is a deadline and a payoff. The job's owner receives the payoff if her job completes by its deadline; otherwise, her payoff is zero. The scheduling objective is therefore to maximize the sum of payoffs. The users are rational: they will misreport their jobs' parameters if it benefits them to do so. We design an incentive-compatible scheduling mechanism tailored specifically to this problem. Because the mechanism is incentive compatible, rational users choose to truthfully declare their parameters to the scheduler. The second type of parallel job scheduling we consider is the online scheduling of malleable jobs.
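The rigid-job batch objective described above can be illustrated with a simple greedy heuristic: try jobs in non-increasing payoff order, and accept a job only if the required number of processors can be reserved early enough to meet its deadline. This is an illustrative sketch under an assumed job encoding; it is not the incentive-compatible mechanism the dissertation designs, and greedy ordering is not optimal in general.

```python
def greedy_rigid_schedule(jobs, m):
    """Greedy sketch of the rigid-job batch problem on m identical
    processors.  Each job is (procs, length, deadline, payoff): it
    needs `procs` processors simultaneously for `length` time, and
    its owner is paid only if it finishes by `deadline`.  Jobs are
    tried in non-increasing payoff order; returns the total payoff
    of the accepted jobs.
    """
    free = [0.0] * m                        # next free time of each processor
    total = 0.0
    for procs, length, deadline, payoff in sorted(jobs, key=lambda j: -j[3]):
        if procs > m:
            continue                        # job can never fit on this system
        free.sort()
        start = free[procs - 1]             # earliest moment `procs` processors are all free
        finish = start + length
        if finish <= deadline:
            for i in range(procs):          # reserve the `procs` earliest-free processors
                free[i] = finish
            total += payoff
    return total
```

For example, on 3 processors, jobs [(2, 3, 5, 10), (1, 2, 2, 5), (3, 1, 10, 8)] yield total payoff 18: the payoff-10 and payoff-8 jobs are accepted, while the payoff-5 job misses its deadline and earns nothing, exactly the all-or-nothing payoff structure described above.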
Online scheduling differs from batch scheduling in that the scheduler cannot foresee job arrivals. The scheduling objective must be optimized despite this lack of information. Unlike a rigid job, a malleable job can be adapted to execute on any number of processors. Each job is associated with an arrival time (release time), a deadline, and a payoff. As in the rigid-job problem, the self-interested user receives the payoff if her job completes by its deadline; otherwise, she receives zero. Besides the online aspect, this problem differs from the rigid one in that the scheduler must choose the number of processors to assign to each job. For this problem we design an online, incentive-compatible mechanism. If a job is scheduled to execute, the mechanism assigns it the number of processors that minimizes its execution time.
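The assignment step for a malleable job can be sketched under an assumed Amdahl-style execution-time model, t(p) = work * (seq_frac + (1 - seq_frac) / p), where `work`, `seq_frac`, and `avail` are hypothetical parameters introduced here for illustration. Since t(p) decreases in p under this model, the time-minimizing choice is to use every available processor; the job is admitted only if it can then still meet its deadline. This is a sketch of the assignment rule only; the dissertation's mechanism additionally handles online arrivals and incentive compatibility.

```python
def assign_processors(work, seq_frac, deadline, now, avail):
    """Choose the processor count that minimizes a malleable job's
    execution time under an assumed Amdahl-style model
    t(p) = work * (seq_frac + (1 - seq_frac) / p), then admit the
    job only if it still finishes by its deadline.  Returns
    (processors, execution_time) on admission, (None, None) on
    rejection.
    """
    best_p, best_t = None, float("inf")
    for p in range(1, avail + 1):           # search all feasible counts
        t = work * (seq_frac + (1 - seq_frac) / p)
        if t < best_t:
            best_p, best_t = p, t
    if best_p is not None and now + best_t <= deadline:
        return best_p, best_t               # admit with minimal execution time
    return None, None                       # reject: deadline cannot be met
```

For instance, a job with work 10 and a 10% sequential fraction on 8 free processors is admitted with all 8 processors, while the same job with a 50% sequential fraction and a deadline of 2 is rejected, since even its fastest schedule takes 5.625 time units.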
Keywords/Search Tags: Scheduling, Job, Processors, Distributed, Systems, Parallel, Mechanism, Time