In this paper, we propose a novel visual saliency detection method from the perspective of reconstruction error. Image boundaries are extracted as background templates, based on the assumption that salient objects rarely touch the image boundary. Unlike traditional methods, in which each image patch is represented by a dictionary or basis functions learned from a set of natural image patches rather than from the remaining patches of the same image, we construct a dictionary from the background templates of each individual image so as to exploit its most relevant visual information. Based on this background dictionary, we propose a reconstruction-error-based saliency measure consisting of three steps. First, we perform dense and sparse reconstruction on the dictionary to obtain reconstruction errors for each image region. Second, a context-based propagation mechanism spreads the reconstruction errors within each K-means cluster. Third, pixel-level saliency is computed by integrating multi-scale reconstruction errors and refining the result with an object-biased Gaussian model. In addition, we propose a novel saliency integration mechanism based on the Bayes formula to combine the dense and sparse reconstruction-error-based saliency maps. The proposed model is evaluated against seventeen state-of-the-art methods on three public standard salient object detection databases. Experimental results show that the proposed method comfortably outperforms the state-of-the-art methods in terms of precision, recall, and F-measure, and that our saliency maps are more robust to background noise while uniformly highlighting salient objects.
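The core idea of scoring regions by how poorly a background dictionary reconstructs them can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function names are hypothetical, the dense error is computed with a PCA basis of the background templates, and the sparse error uses a simple orthogonal matching pursuit encoder in place of whatever sparsity solver and feature extraction the full method employs.

```python
import numpy as np

def dense_reconstruction_error(X, B, n_components=8):
    """Dense error: reconstruct each feature row of X from the PCA basis
    of the background templates B (rows = templates), and return the
    squared residual per row. Regions unlike the background score high."""
    mu = B.mean(axis=0)
    # principal components of the (centered) background templates
    _, _, Vt = np.linalg.svd(B - mu, full_matrices=False)
    U = Vt[:n_components].T            # d x k eigenvector matrix
    Z = (X - mu) @ U                   # PCA coefficients
    R = Z @ U.T + mu                   # dense reconstruction
    return np.sum((X - R) ** 2, axis=1)

def sparse_reconstruction_error(X, B, n_nonzero=3):
    """Sparse error: encode each feature with at most n_nonzero background
    templates via orthogonal matching pursuit and measure the residual."""
    # d x m dictionary with unit-norm atoms (one atom per background template)
    D = (B / (np.linalg.norm(B, axis=1, keepdims=True) + 1e-12)).T
    errors = np.empty(len(X))
    for i, x in enumerate(X):
        residual, support = x.copy(), []
        for _ in range(n_nonzero):
            # pick the atom most correlated with the current residual
            support.append(int(np.argmax(np.abs(D.T @ residual))))
            coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
            residual = x - D[:, support] @ coef
        errors[i] = np.sum(residual ** 2)
    return errors
```

Under this sketch, regions drawn from the same distribution as the boundary templates yield small errors, while regions far from the background subspace (candidate salient objects) yield large ones; the two error maps would then be propagated, fused across scales, and combined via the Bayes-based integration described above.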