
Study On Multi-thread Parallel Programming Method Based On Multi-core Environment

Posted on: 2015-10-22
Degree: Master
Type: Thesis
Country: China
Candidate: H Wang
Full Text: PDF
GTID: 2298330467967160
Subject: Computer application technology
Abstract/Summary:
Over the past decade, demand for parallel computing grew rapidly across many fields while hardware scaling was seriously constrained by the limits of Moore's law. Multi-core CPUs were therefore adopted in a growing number of parallel computers, and parallel programming based on multi-core processors became an inevitable development trend. The GPU, once dedicated to graphics processing, is no longer special-purpose: its large number of parallel computing units gives it outstanding advantages in parallel computing applications and has attracted wide attention from computer scientists, making the GPU an unofficial synonym for a heavyweight compute engine.

After the technology shifts from single-core to multi-core, heterogeneous computing is becoming another breakthrough for overcoming the performance bottleneck of parallel computers. It connects computing units of different architectures and lets them complete a computation together, as in the "collaborative computing and mutual acceleration" between CPU and GPU. CUDA and OpenCL are both GPGPU heterogeneous computing approaches based on the CPU+GPU model: CUDA is a general-purpose parallel computing framework introduced by NVIDIA, while OpenCL is a framework for programming heterogeneous platforms consisting of CPUs, GPUs, or processors of other architectures.

This paper first introduces parallel computer architecture, describing and analyzing the structural, memory-access, and design models of parallel computers. Secondly, it compares parallel programming models on distributed memory with those on shared memory.
Then, by setting up an MPI cluster on Linux, it focuses on implementing MPI+OpenMP hybrid programming on the Linux platform, showing that hybrid programming, which combines message passing between nodes with shared memory within nodes, achieves better speedup than a single programming model. Finally, it uses OpenCL to implement matrix multiplication as a heterogeneous programming experiment.
Keywords/Search Tags:parallel computing, heterogeneous computing, hybrid programming, CPU+GPU, GPGPU, CUDA, OpenCL