Digital image processing is a core component of machine vision, and in recent years general-purpose computation on the GPU has become a popular research topic among scholars at home and abroad. It is therefore of great significance to build real-time, accurate, and efficient machine vision systems. As the demand for practical applications grows, the accuracy and real-time requirements placed on machine vision systems keep increasing: the amount of data to be processed and the computation involved are both large, so the computing performance of current PCs can no longer meet the needs of practical applications. How to further improve the efficiency of image processing algorithms in machine vision systems so that they satisfy real-time requirements is thus a pressing problem in this field.

In recent years the graphics processing unit (GPU) has developed rapidly; accelerating computation on the GPU typically brings a significant speedup, and this has become one of today's research hotspots. The powerful computing and parallel processing capabilities of the GPU can break the processing-speed bottleneck of machine vision systems and improve the execution efficiency of their algorithms.

This paper first introduces the development history, working principle, features, and trends of general-purpose computing on the GPU. It then focuses on the CUDA platform, which fully exploits the GPU's computing and parallel processing capabilities, covering an overview of CUDA, its hardware and software requirements, its programming model, its features, its application fields, and its development trends; an example is presented to demonstrate the GPU's computing and parallel processing power. On this basis, the paper studies how to implement digital image processing algorithms with CUDA, covering algorithms in both the spatial (time) domain and the frequency domain. The work is described as follows:

(1) In the spatial domain, image processing algorithms including histogram equalization, image smoothing, and image sharpening are accelerated on the GPU and implemented with CUDA (see the smoothing sketch below).

(2) In the frequency domain, the convolution of a signal with a filter is accelerated on the GPU and implemented with CUDA (see the cuFFT sketch below).

(3) A new GPU-based corner detection algorithm is designed and programmed with CUDA.

Finally, the paper summarizes methods for optimizing programs written with CUDA.
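As an illustration of the spatial-domain acceleration in item (1), the following is a minimal CUDA sketch of image smoothing with a 3x3 mean filter, one thread per output pixel. The kernel name smooth3x3 and the border handling (border pixels copied unchanged) are illustrative assumptions, not the paper's exact implementation.

    #include <cuda_runtime.h>

    // 3x3 mean (box) smoothing: one thread computes one output pixel.
    // Border pixels are simply copied, an assumption made for brevity.
    __global__ void smooth3x3(const unsigned char* in, unsigned char* out,
                              int width, int height)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;

        if (x == 0 || y == 0 || x == width - 1 || y == height - 1) {
            out[y * width + x] = in[y * width + x];    // leave the border as-is
            return;
        }

        int sum = 0;
        for (int dy = -1; dy <= 1; ++dy)               // accumulate the 3x3 neighborhood
            for (int dx = -1; dx <= 1; ++dx)
                sum += in[(y + dy) * width + (x + dx)];
        out[y * width + x] = (unsigned char)(sum / 9);
    }

    // Typical launch for a width x height grayscale image held in device memory:
    //   dim3 block(16, 16);
    //   dim3 grid((width + 15) / 16, (height + 15) / 16);
    //   smooth3x3<<<grid, block>>>(d_in, d_out, width, height);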
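For the frequency-domain convolution in item (2), a standard approach is to transform the image with cuFFT, multiply it pointwise by the filter's frequency response, and transform back; cuFFT's inverse transform is unnormalized, so the result is scaled by 1/N. The sketch below follows this pattern; the names pointwiseMulScale and filterFrequencyDomain are hypothetical, and the filter spectrum is assumed to be precomputed on the device.

    #include <cufft.h>
    #include <cuda_runtime.h>

    // Multiply two complex spectra element by element and apply a scale factor.
    __global__ void pointwiseMulScale(cufftComplex* img, const cufftComplex* filt,
                                      int n, float scale)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            cufftComplex a = img[i], b = filt[i];
            img[i].x = (a.x * b.x - a.y * b.y) * scale;
            img[i].y = (a.x * b.y + a.y * b.x) * scale;
        }
    }

    // Filter a width x height complex image in place; d_filter holds the
    // filter's frequency response, already transformed and resident on the GPU.
    void filterFrequencyDomain(cufftComplex* d_image, const cufftComplex* d_filter,
                               int width, int height)
    {
        cufftHandle plan;
        cufftPlan2d(&plan, height, width, CUFFT_C2C);         // rows first, then columns
        cufftExecC2C(plan, d_image, d_image, CUFFT_FORWARD);  // image -> spectrum

        int n = width * height;
        pointwiseMulScale<<<(n + 255) / 256, 256>>>(d_image, d_filter, n, 1.0f / n);

        cufftExecC2C(plan, d_image, d_image, CUFFT_INVERSE);  // spectrum -> filtered image
        cufftDestroy(plan);
    }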