
A Study of a Video Object Extraction Method Based on Frequency-Domain and Time-Domain Segmentation

Posted on: 2004-09-02   Degree: Master   Type: Thesis
Country: China   Candidate: W Yang   Full Text: PDF
GTID: 2208360095460462   Subject: Signal and Information Processing
Abstract/Summary:
Visual object (VO) extraction is the basic step for all VO-based operations, such as indexing and retrieval. This thesis puts forward an automatic and efficient method for extracting VOs: contour information obtained from spatial segmentation is combined with motion-vector information obtained from temporal segmentation to yield the final VOs. We introduce and discuss the 3-D object motion model, the wavelet transform of images, the optical flow field, and related topics.

In chapter 1, we first introduce the background of this thesis and the future direction of multimedia development in order to demonstrate the importance of VO extraction. We then briefly review existing methods, point out the merits and drawbacks of each, and finally put forward an efficient method suited to the task.

In chapter 2, we define the VO model and delimit the intended field of application, then briefly introduce the overall framework and the algorithm that combines the temporal and spatial segmentation information.

In chapter 3, we elaborate on the algorithm that extracts VO information by spatial segmentation. We first introduce the theory and merits of the image wavelet transform, then the Mallat algorithm, the multi-scale property, and the quadratic B-spline wavelet and its filter coefficients. We then compute the gradient matrix from the wavelet transform result, thin the contour, and obtain the spatial information. At the end of the chapter, we compare this approach with other methods such as the Canny edge detector.

In chapter 4, we discuss VO extraction based on temporal segmentation in detail. We first put forward the affine model, a 3-D rigid-body motion model, compensate the global motion with it, and obtain the change detection mask (CDM). We then introduce the concept of the optical flow field, compute the local motion vectors with the Horn-Schunck method, and extract the essential temporal information.

In chapter 5, we combine the information obtained in chapters 3 and 4 using an adjacent-region similarity criterion, then apply final operations including seed growing and morphological filtering to obtain the final VOs from the video sequence. We compare this fusion algorithm with other related algorithms at the end of the chapter.

In chapter 6, we test the whole algorithm on four sets of standard MPEG-4 video sequences and comment on the results.

In the last chapter, chapter 7, we summarize the thesis and point out directions for improvement.
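To make the chapter-3 step more concrete, the sketch below estimates a contour map from the detail coefficients of a 2-D wavelet transform. It is only illustrative: PyWavelets' 'bior2.2' biorthogonal spline wavelet stands in for the quadratic B-spline wavelet named in the abstract, and the function name and thresholding rule are assumptions rather than the thesis's actual algorithm; the contour-thinning step is omitted.

import numpy as np
import pywt  # PyWavelets


def wavelet_gradient_edges(image, wavelet="bior2.2", threshold_ratio=0.2):
    """Estimate candidate contour points from the horizontal and vertical
    detail coefficients of a one-level 2-D wavelet transform.

    'bior2.2' and threshold_ratio are illustrative stand-ins, not the
    thesis's quadratic B-spline wavelet or threshold.
    """
    image = np.asarray(image, dtype=float)

    # One-level 2-D DWT: approximation plus horizontal/vertical/diagonal details.
    _, (ch, cv, _) = pywt.dwt2(image, wavelet)

    # Treat the detail coefficients as gradient components and take the modulus.
    modulus = np.sqrt(ch ** 2 + cv ** 2)

    # Keep the strongest responses as candidate contour points.
    edges = modulus > threshold_ratio * modulus.max()
    return modulus, edges


if __name__ == "__main__":
    frame = np.random.rand(144, 176)  # stand-in for a QCIF luminance frame
    modulus, edges = wavelet_gradient_edges(frame)
    print(modulus.shape, int(edges.sum()))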
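The abstract does not give the exact parameterization of the affine model used for global motion compensation in chapter 4. The six-parameter form below is the standard choice and is shown only as a plausible reading, together with a thresholded change detection mask; the symbols a_1..a_6 and the threshold T are assumptions.

% Six-parameter affine model: pixel (x, y) in the current frame maps to
% (x', y') in the reference frame under the global (camera) motion.
\[
\begin{aligned}
x' &= a_1 x + a_2 y + a_3, \\
y' &= a_4 x + a_5 y + a_6 .
\end{aligned}
\]

% After global motion compensation, the change detection mask (CDM) marks
% pixels whose compensated frame difference exceeds a threshold T.
\[
\mathrm{CDM}(x, y) =
\begin{cases}
1, & \bigl| I_t(x, y) - I_{t-1}(x', y') \bigr| > T, \\
0, & \text{otherwise.}
\end{cases}
\]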
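The local-motion step of chapter 4 uses the Horn-Schunck method, whose standard iterative scheme can be sketched as follows. The derivative approximations, the smoothness weight alpha, and the iteration count are textbook defaults, not values taken from the thesis.

import numpy as np
from scipy.ndimage import convolve


def horn_schunck(frame1, frame2, alpha=1.0, n_iter=100):
    """Minimal Horn-Schunck optical flow sketch (chapter-4 local-motion step).

    alpha (smoothness weight) and n_iter are illustrative defaults.
    """
    f1 = np.asarray(frame1, dtype=float)
    f2 = np.asarray(frame2, dtype=float)

    # Image derivatives: spatial gradients of the mean frame, temporal difference.
    fy, fx = np.gradient((f1 + f2) / 2.0)
    ft = f2 - f1

    # Kernel for the local neighbourhood average of the flow field.
    avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]], dtype=float) / 12.0

    u = np.zeros_like(f1)
    v = np.zeros_like(f1)
    for _ in range(n_iter):
        u_avg = convolve(u, avg)
        v_avg = convolve(v, avg)
        # Standard Horn-Schunck update: pull the averaged flow toward the
        # brightness-constancy constraint fx*u + fy*v + ft = 0.
        common = (fx * u_avg + fy * v_avg + ft) / (alpha ** 2 + fx ** 2 + fy ** 2)
        u = u_avg - fx * common
        v = v_avg - fy * common
    return u, v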
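Chapter 5's final operations (fusion, seed growing, morphological filtering) might look roughly like the sketch below. The adjacent-region similarity criterion is heavily simplified here to a mask union, and every parameter, structuring element, and function name is an assumption for illustration only.

import numpy as np
from scipy import ndimage


def fuse_and_clean(spatial_edges, temporal_mask, min_size=50):
    """Combine spatial contour evidence with the temporal mask, apply
    morphological filtering, and keep regions grown from temporal "seeds".

    min_size and the structuring elements are illustrative assumptions.
    """
    # Simplified fusion: union of the two masks stands in for the
    # adjacent-region similarity criterion used in the thesis.
    mask = temporal_mask.astype(bool) | spatial_edges.astype(bool)

    # Morphological filtering removes speckle noise and closes small gaps.
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_closing(mask, structure=np.ones((5, 5)))

    # "Seed growing": keep connected components that contain at least one
    # temporal-mask pixel and are larger than min_size.
    labels, n = ndimage.label(mask)
    keep = np.zeros_like(mask)
    for i in range(1, n + 1):
        region = labels == i
        if region.sum() >= min_size and (region & temporal_mask.astype(bool)).any():
            keep |= region
    return keep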
Keywords/Search Tags: visual object (VO), quadratic B-spline wavelet, affine model, global motion compensation, optical flow field, adjacent-region similarity