
Research On Detection Of Salient Regions In Video

Posted on: 2016-03-18
Degree: Master
Type: Thesis
Country: China
Candidate: S Tian
Full Text: PDF
GTID: 2348330488974638
Subject: Communication and Information System
Abstract/Summary:
The purpose of visual attention region detection is to construct a reasonable computer vision model that simulates the human visual attention mechanism and quickly and efficiently detects salient regions in an image. This technology plays an important role in medical equipment, military reconnaissance, intelligent robots, and other fields. An excellent visual saliency model provides high detection accuracy and remains robust across different video scenes. A large number of neuropsychological studies show that there are two different directions for constructing visual attention region detection models: bottom-up methods, which use no prior information and are independent of any specific task, and top-down methods, which rely on prior information about a specific target.

In this paper, we focus on bottom-up visual saliency detection. We analyze the shortcomings of existing video saliency detection models and then propose a saliency detection algorithm based on the motion vector sequence corresponding to the video frames, together with a saliency detection model based on a novel strategy for fusing the static and dynamic saliency information of each video frame. The research results and main work of this paper are as follows:

1. For video with strong randomness or strong motion in the scene, the change of pixels between video frames is dissimilar, so salient region detection based on temporal optical flow yields poor accuracy. In this paper, we analyze the distribution of pixel gray values in the motion vector sequence corresponding to the video frames.
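The motion-vector analysis described above can be sketched as follows. This is a minimal illustration, not the thesis's exact algorithm: it assumes a motion magnitude map `mag` (e.g., from dense optical flow) has already been computed, maps it to gray values, and treats the median gray value as an estimate of irrelevant background motion to be suppressed. The function name and the quantile threshold are hypothetical choices for the sketch.

```python
import numpy as np

def motion_saliency(mag, keep_ratio=0.2):
    """Map motion-vector magnitudes to a gray-value saliency map,
    suppressing globally dominant (background) motion.

    mag        -- 2-D array of per-pixel motion magnitudes
    keep_ratio -- fraction of pixels retained as salient (assumed)
    """
    # Normalize magnitudes to 8-bit gray values.
    gray = 255.0 * (mag - mag.min()) / (np.ptp(mag) + 1e-8)
    # The median gray value approximates background motion,
    # assuming the salient object occupies a minority of pixels.
    background = np.median(gray)
    # Per-pixel deviation from the background estimate.
    residual = np.abs(gray - background)
    # Retain only the strongest deviations; zero the rest.
    thresh = np.quantile(residual, 1.0 - keep_ratio)
    return np.where(residual >= thresh, residual, 0.0)
```

In practice `mag` would come from a dense optical flow field (e.g., the magnitude of per-pixel flow vectors between consecutive frames); the suppression step is what removes the estimation error contributed by background motion.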
By preserving the pixel gray values correlated with saliency, the estimation error caused by irrelevant background motion is eliminated, which effectively improves the accuracy of video saliency detection.

2. For video in which the motion that attracts human visual attention is much weaker than background motion unrelated to attention, the salient region is located mainly by static saliency information, so a detection algorithm based on the temporal domain alone is ineffective, and existing spatio-temporal saliency detection models differ widely in their results across video scenes. In view of this, this paper presents a dynamic weighted fusion method based on pixel differences. It accounts for the contributions of both the static and the dynamic saliency of a video frame to the final saliency, combining the difference of pixel gray values with the maximum gray value to compute the fusion weights. This effectively reduces the influence of computation errors in the static/dynamic saliency maps and improves the robustness of the detection algorithm on video sequences with different scenes.

3. In this paper, we use the UCSD Background Subtraction Dataset as the evaluation benchmark for the algorithm; it contains 18 image sequences of moving objects in different scenes, with corresponding ground-truth reference data. By applying both the existing algorithms and the improved algorithm proposed in this paper to the test sequences, and comparing the detection results with each other as well as with the ground-truth data, the effectiveness and robustness of the proposed algorithm are demonstrated.
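The dynamic weighted fusion of point 2 can be sketched as below. The thesis states only that the weights combine the pixel gray-value difference with the maximum gray value; the specific weighting formula here is an assumption for illustration, not the author's exact rule. Where the two maps disagree strongly relative to their peak response, the sketch trusts the stronger map; where they agree, it averages them.

```python
import numpy as np

def fuse_saliency(static_map, dynamic_map, eps=1e-8):
    """Pixel-wise dynamic weighted fusion of a static and a dynamic
    saliency map (hypothetical weighting based on the per-pixel
    gray-value difference and the per-pixel maximum gray value)."""
    s = static_map.astype(np.float64)
    d = dynamic_map.astype(np.float64)
    diff = np.abs(s - d)       # pixel gray-value difference
    peak = np.maximum(s, d)    # maximum gray value per pixel
    # alpha in [0, 1]: large where the maps disagree strongly.
    alpha = diff / (peak + eps)
    # Disagreement -> trust the stronger response (peak);
    # agreement   -> even split between the two maps.
    return alpha * peak + (1.0 - alpha) * 0.5 * (s + d)
```

For example, a pixel where both maps read 100 fuses to 100, while a pixel with a static response of 0 and a dynamic response of 50 fuses to roughly 50, letting the dynamic map dominate where the static map is silent. Such per-pixel weighting is what keeps the fusion robust across scenes where one of the two cues is unreliable.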
Keywords/Search Tags: computer vision, saliency detection, optical flow, time domain filtering, image fusion