Human fixation prediction (or saliency detection) in traffic scenes is an important way to understand human visual cognition and to facilitate the design of Advanced Driver Assistance Systems (ADAS). Compared with daytime traffic scenes, visual saliency detection at night is more challenging because of the complex illumination conditions. Even deep learning methods that perform well in daytime traffic scenes struggle to remain robust under changing illumination, such as the transfer from day to night. Efficient attention allocation seems effortless for the human visual system (HVS) but remains difficult for computers. On this issue, several studies have shown that scene guidance plays an important role in attention allocation in the HVS, which may explain why bottom-up or texture-based methods fail on night traffic scenes that are texturally complex but geometrically simple. In this paper, we focus on layout-guided saliency detection for nighttime traffic scenes. We build the spatial layout of a traffic scene from stable semantic cues, e.g., the vanishing point. Then, the geometric context-related prior distribution of visual attention is learned from human fixations collected with an eye-tracking recorder. Finally, robust and efficient saliency detection is achieved by combining bottom-up features with the layout-guided prior.

Building on these findings about how scene structure information guides visual attention and attention paths, this paper proposes two computational models for saliency detection in nighttime traffic scenes. The contributions of this paper are twofold:

(1) Based on the relationship between attention-transfer paths and fixation distributions, we build a model that enhances saliency prediction performance. First, we collected human fixations on nighttime traffic scenes under free viewing and built a nighttime traffic image dataset with fixations. Then, we analyzed the fixation-transfer strategy of the HVS during scene analysis and its relationship to the attention distribution. Finally, we exploit this attention-transfer strategy to build a model that improves saliency detection performance on both night traffic scenes and natural scenes.

(2) Based on the mechanisms of vision-guided motion and the relationship between traffic-scene context and fixation and saliency distributions, we propose a layout-guided visual saliency prediction model. First, we analyze the connection between saliency distribution and scene structure in traffic scenes by statistically analyzing the free-viewing eye-tracking dataset. Then, from edges extracted by a simple method, we accurately construct the structure of the traffic scene from the vanishing point and road edges. Finally, the geometric context-related prior distribution of visual attention is learned from the collected human fixations, and robust saliency detection is achieved by combining bottom-up features with the layout-guided prior. The results demonstrate that under the complex conditions of nighttime traffic, e.g., day-to-night transfer, the proposed context-based method significantly outperforms classical bottom-up methods and achieves comparable but more robust performance than deep learning-based methods.
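The fusion step described above, combining a bottom-up saliency map with a layout-guided prior anchored at the vanishing point, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the Gaussian-around-vanishing-point prior, the geometric-mean fusion rule, and the names `layout_prior`, `combine`, `vp`, `sigma`, and `alpha` are all assumptions made for the example.

```python
import numpy as np

def layout_prior(shape, vp, sigma=0.25):
    """Gaussian prior over image coordinates, peaked at the vanishing point.

    shape : (H, W) of the image.
    vp    : (row, col) of the estimated vanishing point.
    sigma : prior spread as a fraction of the image diagonal (assumed value).
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (ys - vp[0]) ** 2 + (xs - vp[1]) ** 2   # squared distance to vp
    scale = (sigma * np.hypot(h, w)) ** 2
    prior = np.exp(-d2 / (2.0 * scale))
    return prior / prior.max()                   # normalize peak to 1

def combine(bottom_up, prior, alpha=0.5):
    """Fuse a bottom-up saliency map with the layout-guided prior.

    A weighted geometric mean; alpha balances the two cues
    (an assumed fusion rule, used here only for illustration).
    """
    fused = (bottom_up ** (1.0 - alpha)) * (prior ** alpha)
    return fused / (fused.max() + 1e-12)

# Toy usage: a noisy bottom-up map sharpened by a prior near the road's end.
bu = np.random.rand(120, 160)
pr = layout_prior((120, 160), vp=(60, 80))
sal = combine(bu, pr)
```

In practice the prior would be learned from the collected fixations rather than fixed as a Gaussian, and the bottom-up term would come from an actual saliency front end; the sketch only shows how a geometric-context prior can re-weight bottom-up evidence.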