Humans can rapidly direct their attention to regions of interest (ROI) in complex scenes and concentrate their limited processing resources on those regions. Saliency detection, a branch of image processing in computer vision, aims to identify salient objects by simulating the human visual system and to generate a saliency map in which each pixel's intensity indicates the likelihood that it belongs to the salient object.

In this paper, we propose a visual saliency detection algorithm that incorporates both generative and discriminative saliency models into a unified framework. First, we use an image-patch search algorithm to collect abundant background superpixels, from which a background dictionary is learnt. We then develop a generative model that defines image saliency as the sparse coding residual with respect to the learnt background dictionary. Second, we introduce a discriminative model by solving an optimization problem that exploits the intrinsic relevance of similar regions to regress region-based saliency toward a smooth state. To handle objects of different sizes, we apply multi-scale integration, which yields a more continuous and smooth result. Furthermore, object location, a vital prior for saliency detection, is used to suppress background noise in an object-level map. The final saliency map is obtained by combining the object-level map with the multi-scale integrated saliency map.

The proposed model is evaluated and compared with twenty-two state-of-the-art methods on four standard public salient object detection databases. Experimental results show that the proposed method comfortably outperforms the other state-of-the-art methods in terms of PR curves, F-measure, and visual quality.
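The sparse-coding-residual idea in the generative model can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: it assumes superpixel features are fixed-length vectors, uses a hypothetical random background dictionary `D` in place of one learnt from collected background superpixels, and solves the sparse coding problem with plain ISTA. A feature well explained by the background dictionary gets a small residual (low saliency); a feature unlike the background gets a large one.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code_residual(x, D, lam=0.05, n_iter=200):
    """Encode feature x over background dictionary D via ISTA
    (lasso: min_a 0.5*||x - D a||^2 + lam*||a||_1) and return the
    reconstruction residual ||x - D a|| as the saliency score."""
    eta = 1.0 / np.linalg.norm(D, 2) ** 2   # step size from the Lipschitz constant
    alpha = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ alpha - x)        # gradient of the quadratic term
        alpha = soft_threshold(alpha - eta * grad, eta * lam)
    return np.linalg.norm(x - D @ alpha)

# Toy setup (assumed dimensions, not from the paper): 64-d features, 16 atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 16))
D /= np.linalg.norm(D, axis=0)              # unit-norm dictionary atoms
x_bg = D @ np.where(rng.random(16) < 0.2, rng.standard_normal(16), 0.0)
x_fg = rng.standard_normal(64)              # feature unlike the background

# A background-like feature should score lower than a foreground-like one.
print(sparse_code_residual(x_bg, D) < sparse_code_residual(x_fg, D))
```

In the full method this score would be computed per superpixel and assembled into a pixel-wise saliency map before the multi-scale integration step.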