
Optimization Research And Implementation Of Full Search Motion Estimation Algorithm Based On GPU Platform

Posted on: 2020-01-09
Degree: Master
Type: Thesis
Country: China
Candidate: Y Y Guo
Full Text: PDF
GTID: 2438330575959496
Subject: Engineering

Abstract/Summary:
With the rapid development of video applications, video compression has attracted growing attention; it is important for video storage, video transmission, network surveillance, and online video. Video compression removes redundant information from a video sequence, reducing the amount of data that must be stored and transmitted. Inter-frame prediction exploits the similarity between adjacent frames, and the motion estimation algorithm is one of the core algorithms of both inter-frame prediction and video compression as a whole. Motion estimation finds, for each block in the current frame, its best-matching reference block in a reference frame. Because this requires a large number of block-matching comparisons for each current block, block matching occupies most of the running time of the entire algorithm. Implementing motion estimation on a GPU platform can therefore effectively accelerate the algorithm and reduce its running time. Existing work has accelerated motion estimation by applying data-reuse methods to the GPU's on-chip memory. Based on an analysis of the research status at home and abroad, the remaining problems are summarized as follows:

(1) Current GPU-based data-reuse methods for motion estimation mainly reuse data between adjacent search windows in the GPU's shared memory. On the one hand, existing work does not consider other reuse methods: when the reusable data between adjacent search windows exceeds the shared-memory capacity, this reuse method cannot be applied at all. On the other hand, other GPU memories, such as registers, are not considered for data reuse.

(2) Existing data-reuse research on GPU-based motion estimation does not combine multiple reuse methods with the GPU's multi-level storage architecture, so the on-chip storage resources cannot be fully exploited to maximize data reuse, and the motion estimation algorithm is not accelerated as well as it could be.

To address these deficiencies, this thesis conducts in-depth research; the main contributions and innovations are as follows:

(1) A data-reuse method for full-search motion estimation based on the GPU's multiple types of on-chip memory is proposed. Combining multiple reuse methods with the GPU's multiple on-chip memory types makes it possible, on the one hand, to select the fastest reuse method for a given on-chip memory size and, on the other hand, to select the best on-chip memory for a given reuse method. Four data-reuse methods are implemented on three kinds of GPU on-chip memory and compared experimentally.

(2) Based on the characteristics of the GPU storage architecture, a data-reuse method exploiting the GPU's multi-level storage hierarchy is proposed. Different reuse methods are applied at different memory levels, for example combining data reuse between reference blocks in registers with data reuse between reference-block strips in shared memory. This makes full use of the GPU's on-chip storage resources and improves the running speed of the algorithm. Three different combinations are described and compared experimentally.
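The full-search block-matching process described above can be sketched as follows. This is an illustrative CPU reference in Python, not the thesis's CUDA implementation; the SAD matching criterion, the frame layout as 2D lists of luma samples, and all function names are assumptions made for the sketch.

```python
# Illustrative full-search motion estimation sketch (not the thesis code).
# Frames are 2D lists of luma values; block matching uses the SAD criterion.

def sad(cur, ref, cx, cy, rx, ry, n):
    """Sum of absolute differences between the n x n current block at
    (cx, cy) and the candidate reference block at (rx, ry)."""
    total = 0
    for dy in range(n):
        for dx in range(n):
            total += abs(cur[cy + dy][cx + dx] - ref[ry + dy][rx + dx])
    return total

def full_search(cur, ref, cx, cy, n, p):
    """Exhaustively test every candidate in the (2p+1) x (2p+1) search
    window around the current block; return (motion vector, best cost)."""
    h, w = len(ref), len(ref[0])
    best = (None, float('inf'))
    for my in range(-p, p + 1):
        for mx in range(-p, p + 1):
            ry, rx = cy + my, cx + mx
            if 0 <= ry and ry + n <= h and 0 <= rx and rx + n <= w:
                cost = sad(cur, ref, cx, cy, rx, ry, n)
                if cost < best[1]:
                    best = ((mx, my), cost)
    return best
```

The two nested candidate loops make clear why block matching dominates the running time: every current block triggers up to (2p+1)^2 SAD evaluations, each touching n^2 pixel pairs, which is exactly the workload the thesis maps onto GPU threads.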
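The capacity constraint noted in problem (1) can be illustrated with a back-of-the-envelope calculation. The figures here are assumptions for the sketch (1 byte per luma sample, a 48 KB shared-memory budget typical of many CUDA GPUs), not parameters from the thesis: for an N x N block with search range p, horizontally adjacent search windows overlap in a strip of width 2p, which is the data that could be kept on chip instead of being reloaded from global memory.

```python
# Back-of-the-envelope sketch of search-window data reuse (illustrative
# figures, not measurements from the thesis). For an N x N block and search
# range p, the search window is (N + 2p) x (N + 2p) pixels; adjacent blocks
# are N pixels apart, so their windows share a (N + 2p) x 2p strip.

def reuse_stats(n, p, bytes_per_pixel=1, shared_mem_bytes=48 * 1024):
    side = n + 2 * p                                  # search-window side length
    window_bytes = side * side * bytes_per_pixel      # one full search window
    overlap_bytes = side * 2 * p * bytes_per_pixel    # strip shared with the next window
    fits = window_bytes <= shared_mem_bytes           # can the window be cached on chip?
    return window_bytes, overlap_bytes, fits
```

For a 16 x 16 block with p = 32, the 80 x 80 window (6400 bytes) fits comfortably in shared memory; raising p to 128 grows the window to 272 x 272 (73984 bytes), which exceeds a 48 KB budget, so shared-memory reuse between adjacent windows is no longer applicable and other memories, such as registers, become relevant.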
Keywords/Search Tags: Video Compression, Motion Estimation, Data Reuse, CUDA