
Parallel discrete event simulation of queuing networks using GPU-based hardware acceleration

Degree: Ph.D. (Dissertation)
Posted on: 2010-08-05
University: University of Florida
Candidate: Park, Hyungwook
GTID: 1448390002977787
Subject: Engineering
Abstract/Summary:
Queuing networks are used widely in computer simulation studies. Examples of queuing networks can be found in areas such as supply chains, manufacturing work flow, and Internet routing. If the networks are fairly small in size and complexity, it is possible to create discrete event simulations of the networks without incurring significant delays in analyzing the system. However, as the networks grow in size, such analysis can become time consuming and thus require more expensive parallel processing computers or clusters.

The trend in computing architectures has been toward multicore central processing units (CPUs) and graphics processing units (GPUs). A GPU is fairly inexpensive hardware, found in most recent computing platforms, and a practical example of a single instruction, multiple data (SIMD) architecture. The majority of studies using the GPU within the graphics and simulation communities have focused on models that are traditionally simulated using regular time increments, whether these increments are accomplished through the addition of a time delta (i.e., numerical integration) or event scheduling using the delta (i.e., discrete event approximations of continuous-time systems). These types of models have the property of being decomposable over a variable or parameter space. In prior studies, discrete event simulation, such as queuing network simulation, has been characterized as an inefficient application for the GPU, primarily due to the inherent synchronicity of the GPU organization and an apparent mismatch between the classic event scheduling cycle and the GPU's basic functionality. However, we have found that the irregular time advances common in discrete event models can be successfully mapped to a GPU, making it possible to execute discrete event systems on an inexpensive personal computer platform.

This dissertation introduces a set of tools that allows the analyst to simulate queuing networks in parallel using a GPU. We then present an analysis of a GPU-based algorithm, describing the benefits and issues of the GPU approach. The algorithm clusters events, achieving speedup at the expense of an approximation error that grows as the cluster size increases. We were able to achieve 10x speedup using our approach, with a small error in the output statistics for a general network topology. Based on error analysis trends, this error can be mitigated to obtain reasonably accurate output statistics.

(Full text of this dissertation may be available via the University of Florida Libraries web site. Please check http://www.uflib.ufl.edu/etd.html)
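The event-clustering idea can be illustrated with a small, self-contained sketch. The Python code below is an illustrative assumption, not the dissertation's actual GPU algorithm: it simulates a single M/M/1 queue, pulls every pending event whose timestamp falls within a hypothetical window of width DELTA, and stamps those events with the start of the window, the way a batched kernel launch would process a cluster of events concurrently. All names and parameters (simulate, DELTA, ARRIVAL_RATE, SERVICE_RATE) are invented for this sketch. Shrinking DELTA to zero recovers an exact event-scheduling simulation; widening it trades timestamp accuracy for batch parallelism, mirroring the speedup-versus-error trade-off described above.

```python
import heapq
import random

# Illustrative sketch only (assumed parameters, not from the dissertation):
# a single-server queue simulated by clustering events whose timestamps fall
# within a window of width DELTA and treating them as concurrent.
ARRIVAL_RATE = 1.0   # Poisson arrival rate
SERVICE_RATE = 1.2   # exponential service rate
HORIZON = 1000.0     # simulated time horizon

def simulate(delta):
    """Return the time-averaged queue length for a given cluster width."""
    random.seed(0)
    fel = []  # future event list: (time, kind)
    heapq.heappush(fel, (random.expovariate(ARRIVAL_RATE), "arrival"))
    queue_len, busy = 0, False
    area, last_t = 0.0, 0.0  # time-integrated queue length

    while fel and fel[0][0] < HORIZON:
        window_end = fel[0][0] + delta
        # Pull every event inside the window; on a GPU these would be handed
        # to one kernel launch and processed concurrently.
        cluster = []
        while fel and fel[0][0] <= window_end:
            cluster.append(heapq.heappop(fel))
        t_cluster = cluster[0][0]  # approximate all timestamps by the earliest
        area += queue_len * (t_cluster - last_t)
        last_t = t_cluster
        for _, kind in cluster:
            if kind == "arrival":
                queue_len += 1
                heapq.heappush(fel, (t_cluster + random.expovariate(ARRIVAL_RATE), "arrival"))
                if not busy:
                    busy = True
                    heapq.heappush(fel, (t_cluster + random.expovariate(SERVICE_RATE), "departure"))
            else:  # departure
                queue_len -= 1
                if queue_len > 0:
                    heapq.heappush(fel, (t_cluster + random.expovariate(SERVICE_RATE), "departure"))
                else:
                    busy = False
    return area / last_t if last_t else 0.0

# delta = 0.0 is an exact event-scheduling run; larger deltas show the
# clustering approximation error the abstract refers to.
print("mean queue length, delta=0.0:", simulate(0.0))
print("mean queue length, delta=0.5:", simulate(0.5))
```

In this sketch the error comes entirely from collapsing timestamps within a window, which is one plausible way to read the abstract's statement that the approximation error grows with cluster size; the dissertation's actual GPU algorithm and error analysis are given in the full text.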
Keywords/Search Tags: GPU, Networks, Discrete event, Simulation, Using, Parallel