
Research And Implementation Of Performance Optimization Of Fast Search Motion Estimation Algorithm

Posted on: 2020-07-29    Degree: Master    Type: Thesis
Country: China    Candidate: Y Wang    Full Text: PDF
GTID: 2438330575459502    Subject: Engineering
Abstract/Summary:
In recent years, with the rapid development of Internet information technology, people's demand for video has grown rapidly and the requirements for video quality have become higher and higher, so improving video quality is especially important. However, the amount of uncompressed video data is huge, which makes it difficult to store and transmit. Video compression is a key technology for eliminating redundant data and reducing the storage and transmission overhead of video. Improving the quality and efficiency of video compression has therefore become a hot issue for both researchers and industry. In video compression, inter-frame prediction removes temporal redundancy between frames in a sequence, and motion estimation is the core technology of inter-frame prediction. However, motion estimation is time-consuming, accounting for about 70% of the total encoding time, and for high-definition video the proportion is even higher. Reducing the time cost of motion estimation is therefore the key to accelerating the entire video compression process.

Motion estimation algorithms are classified into full search algorithms and fast search algorithms. Although fast search algorithms are much faster than full search, their search time is still long because their data access patterns are irregular. Data reuse is an effective means of improving the performance of motion estimation algorithms, and scholars at home and abroad have carried out a great deal of related research on it. However, there are currently few studies on data reuse methods for fast search motion estimation.

Taking the performance optimization of fast search motion estimation algorithms and its implementation as the research objective, and taking the typical fast search motion estimation algorithm TZSearch as the research object, this paper carries out work from two aspects: data reuse between different search steps of the same current block, and data reuse between the search areas of two adjacent current blocks. The experimental results show that the data reuse methods proposed in this paper can effectively improve the efficiency of the algorithm. The innovations and contributions of this paper include the following aspects:

(1) A method of data reuse between different search steps of the same current block is proposed. The TZSearch algorithm first performs a diamond search, then performs a raster search centered on the result of the previous step, and finally performs a cyclic diamond search. Based on this search process, this paper proposes a data reuse method between the different search steps of the same current block, which makes full use of the data in the reusable region during the search, thereby improving data processing efficiency and reducing the running time of the algorithm. This paper first analyzes the TZSearch motion estimation algorithm and its data access characteristics, then analyzes the range of data that can be reused between different search steps, and finally stores the reusable data in on-chip memory so that subsequent search steps can read it directly from on-chip memory, reducing the number of accesses to off-chip memory and speeding up data access. The experimental results show that this data reuse method significantly improves the running speed of the TZSearch algorithm, reducing the search time by up to 62.02%.
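To make the first data reuse method concrete, the following is a minimal CUDA sketch of the idea described above: the search window of the current block is loaded into on-chip (shared) memory once, and the subsequent diamond, raster, and refinement steps read only from that on-chip copy. The block size, search range, kernel and helper names, and the single-thread search loop are illustrative assumptions made for readability; this is not the thesis author's actual implementation.

#include <cuda_runtime.h>
#include <cstdint>
#include <cstdlib>

#define BLOCK_SIZE   16                                 // assumed size of a current block
#define SEARCH_RANGE 32                                 // assumed half-width of the search window
#define WIN_DIM      (2 * SEARCH_RANGE + BLOCK_SIZE)    // side length of the on-chip window

// SAD between the current block and the reference window at candidate offset
// (dx, dy); all reads come from on-chip arrays.
__device__ int sadShared(uint8_t win[][WIN_DIM],
                         uint8_t curBlk[][BLOCK_SIZE], int dx, int dy)
{
    int sad = 0;
    for (int y = 0; y < BLOCK_SIZE; ++y)
        for (int x = 0; x < BLOCK_SIZE; ++x)
            sad += abs((int)win[SEARCH_RANGE + dy + y][SEARCH_RANGE + dx + x]
                       - (int)curBlk[y][x]);
    return sad;
}

// One CUDA thread block handles one current block located at (blkX, blkY).
__global__ void tzsearchStepReuse(const uint8_t* ref, const uint8_t* cur,
                                  int width, int height,
                                  int blkX, int blkY, int2* bestMV)
{
    __shared__ uint8_t win[WIN_DIM][WIN_DIM];           // reusable search window
    __shared__ uint8_t curBlk[BLOCK_SIZE][BLOCK_SIZE];  // current block pixels

    int tid = threadIdx.y * blockDim.x + threadIdx.x;
    int nThreads = blockDim.x * blockDim.y;

    // Cooperative load: the search window is fetched from off-chip memory ONCE.
    for (int i = tid; i < WIN_DIM * WIN_DIM; i += nThreads) {
        int wy = i / WIN_DIM, wx = i % WIN_DIM;
        int gy = min(max(blkY - SEARCH_RANGE + wy, 0), height - 1);  // clamp at borders
        int gx = min(max(blkX - SEARCH_RANGE + wx, 0), width - 1);
        win[wy][wx] = ref[gy * width + gx];
    }
    for (int i = tid; i < BLOCK_SIZE * BLOCK_SIZE; i += nThreads) {
        int y = i / BLOCK_SIZE, x = i % BLOCK_SIZE;
        curBlk[y][x] = cur[(blkY + y) * width + (blkX + x)];
    }
    __syncthreads();

    if (tid != 0) return;                               // single-thread search, for clarity only

    // Step 1: expanding diamond search around the window center, fed from shared memory.
    int bestX = 0, bestY = 0, bestSad = sadShared(win, curBlk, 0, 0);
    for (int d = 1; d <= SEARCH_RANGE; d *= 2) {
        const int px[4] = { d, -d, 0, 0 }, py[4] = { 0, 0, d, -d };
        for (int k = 0; k < 4; ++k) {
            int s = sadShared(win, curBlk, px[k], py[k]);
            if (s < bestSad) { bestSad = s; bestX = px[k]; bestY = py[k]; }
        }
    }
    // Steps 2 and 3 (raster search and cyclic diamond refinement) would call
    // sadShared() with their own candidate offsets (within ±SEARCH_RANGE),
    // reusing the same window without touching off-chip memory again.
    *bestMV = make_int2(bestX, bestY);
}

The kernel would be launched with one CUDA thread block per current block (for example dim3(16, 16) threads per block); the raster and cyclic refinement steps reuse the same shared window through further calls to the SAD helper, which is the point of the reuse method.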
(2) A method of data reuse between the search areas of two adjacent current blocks is proposed. Combined with the search process of the TZSearch algorithm, this paper proposes a data reuse method between the search areas of two adjacent current blocks (a minimal sketch appears at the end of this abstract). The method keeps the data of the reusable region on chip during the search, reduces the number of times the algorithm accesses off-chip memory, shortens the running time of the algorithm, and further improves data processing efficiency. Taking the TZSearch algorithm as an example, this paper first analyzes the reusability of the data in the search areas corresponding to two adjacent current blocks, and finds that these search areas overlap. The data of the overlapping area is then pre-stored in on-chip memory for the subsequent search. While the algorithm runs, the data of the overlapping area of the video image can be read directly from on-chip memory, and off-chip memory only needs to be accessed for the non-overlapping area, which reduces the number of off-chip memory accesses and speeds up data access. The experimental results show that this data reuse method significantly improves the running speed of the TZSearch algorithm, reducing the search time by up to 54.79%.

(3) Implementation of the optimization methods. The fast search motion estimation algorithm is implemented and optimized on the CUDA architecture, and the proposed data reuse methods are applied to analyze and optimize it. Experiments are carried out on the proposed optimization methods and compared with related research to evaluate whether they achieve the best results, and finally the experimental results are presented visually.
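For the second data reuse method, the following is a minimal CUDA sketch, under the same illustrative constants as the previous sketch (BLOCK_SIZE, SEARCH_RANGE, WIN_DIM), of how the overlap between the search windows of horizontally adjacent current blocks can be kept on chip: the shared window is treated as a circular column buffer, so that when the window slides right by one block width, only the newly exposed columns are fetched from off-chip memory. The circular-buffer layout, the kernel and helper names, and the placeholder result are assumptions made for illustration, not the thesis implementation.

#include <cuda_runtime.h>
#include <cstdint>

#define BLOCK_SIZE   16
#define SEARCH_RANGE 32
#define WIN_DIM      (2 * SEARCH_RANGE + BLOCK_SIZE)

// Physical column in the circular window buffer for logical window column wx.
__device__ int physCol(int base, int wx) { return (base + wx) % WIN_DIM; }

// One CUDA thread block processes one row of current blocks from left to right,
// sliding the shared search window instead of reloading it for every block.
__global__ void tzsearchAdjacentReuse(const uint8_t* ref, int width, int height,
                                      int blkY, int numBlocks, int2* bestMVs)
{
    __shared__ uint8_t win[WIN_DIM][WIN_DIM];   // circular column buffer
    int tid = threadIdx.y * blockDim.x + threadIdx.x;
    int nThreads = blockDim.x * blockDim.y;
    int base = 0;                               // physical column of logical column 0

    for (int b = 0; b < numBlocks; ++b) {
        int blkX = b * BLOCK_SIZE;
        // First block: load the full window. Later blocks: only the BLOCK_SIZE
        // columns newly exposed on the right edge; the overlap stays on chip.
        int newCols  = (b == 0) ? WIN_DIM : BLOCK_SIZE;
        int firstNew = WIN_DIM - newCols;

        for (int i = tid; i < WIN_DIM * newCols; i += nThreads) {
            int wy = i / newCols;
            int wx = firstNew + i % newCols;    // logical column in the window
            int gy = min(max(blkY - SEARCH_RANGE + wy, 0), height - 1);
            int gx = min(max(blkX - SEARCH_RANGE + wx, 0), width - 1);
            win[wy][physCol(base, wx)] = ref[gy * width + gx];   // only new data read off chip
        }
        __syncthreads();                        // window ready for this block

        // The TZSearch steps for the block at (blkX, blkY) run here, reading
        // win[wy][physCol(base, wx)] exactly where the previous sketch reads
        // win[wy][wx]; the overlapping columns are never re-fetched.
        if (tid == 0) bestMVs[b] = make_int2(0, 0);   // placeholder for the elided search result
        __syncthreads();                        // finish before stale columns are overwritten

        base = (base + BLOCK_SIZE) % WIN_DIM;   // slide the window right for the next block
    }
}

One thread block would process one row of current blocks sequentially; the elided search phase is the same shared-memory TZSearch shown in the previous sketch, indexed through physCol(), so only the non-overlapping part of each new search area generates off-chip traffic.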
Keywords/Search Tags:Motion Estimation, TZSearch, Algorithm Optimization, Video Compression, GPU