
Visual Saliency Regional Object Detection Of Traffic Scenes Based On Deep Learning

Posted on: 2021-03-05
Degree: Master
Type: Thesis
Country: China
Candidate: L Qin
Full Text: PDF
GTID: 2392330623467928
Subject: Biomedical engineering
Abstract/Summary:
The traffic driving scene is a complex environment characterized by three-dimensional diversity, rapid transient changes, and a mixture of motion and stillness. The scene contains not only static stimuli (objects), such as cars parked on the roadside, but also dynamic stimuli (objects), such as moving cars or pedestrians. Guided by the visual selective attention mechanism, experienced drivers selectively focus on the salient regions of the traffic scene that are closely related to driving safety or the driving task, and on the important objects within those regions, while automatically ignoring most scene information or objects unrelated to driving safety, thereby achieving safe driving. By simulating the driver's visual selective attention mechanism in traffic scenes and studying object detection within the salient regions of the scene, this work can provide a fast and safe object-detection strategy for future autonomous (unmanned) vehicles and greatly reduce their computational energy consumption.

Based on the existing driver eye-movement dataset and saliency-region computation model in our laboratory, this thesis establishes a new salient-region object detection dataset. The visual attention mechanism is integrated into an existing object detection model, and a new salient-region object detection model for traffic scenes, ID-YOLO (Increase-Decrease Based You Only Look Once), is proposed. Experimental results show that ID-YOLO can accurately and quickly detect important objects in the salient regions of traffic driving environments.

The main content of the thesis is as follows. First, the thesis describes the construction of a salient-region object detection dataset based on the eye-movement behavior of experienced drivers. Then, two commonly used detection models, Faster R-CNN and YOLOv3, are applied to the salient object detection (SOD) dataset constructed in this thesis. It is found that both baselines perform poorly on salient-object detection in traffic scenes: the basic Faster R-CNN cannot reach real-time detection speed and is prone to duplicate detections, while the basic YOLOv3 suffers from missed and false detections.

To address these shortcomings, the thesis then proposes ID-YOLO, a salient-region object detection network based on an improved YOLOv3. For speed, ID-YOLO streamlines the feature extraction network of YOLOv3, substantially increasing detection speed without degrading feature extraction. For accuracy, inspired by the visual selective attention mechanism, ID-YOLO uses lower-level features to learn object location information and correspondingly adds two detection scales. This makes the predicted bounding boxes more accurate and also improves the detection of small objects.

Finally, the thesis evaluates the detection performance of ID-YOLO both qualitatively and quantitatively. The experimental results show that obtaining prior (anchor) boxes with a clustering algorithm, slimming the feature extraction network, and adding multi-scale prediction on low-level features each improve model performance. ID-YOLO achieves a detection accuracy of 79.52% on the SOD dataset, an improvement of about 8% over Faster R-CNN and about 3% over YOLOv3.
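The prior (anchor) boxes mentioned above are conventionally obtained by k-means clustering over the ground-truth box widths and heights, using 1 − IoU rather than Euclidean distance so that large and small boxes are treated fairly; this is the standard YOLOv2/YOLOv3 recipe, and the thesis does not publish its exact procedure, so the sketch below is an illustrative reconstruction (all function names and data are hypothetical):

```python
import numpy as np

def iou_wh(boxes, clusters):
    """Pairwise IoU between (N, 2) box sizes and (K, 2) cluster sizes.

    Boxes are compared as if centered at the same point, so only
    width/height matter — the usual trick for anchor clustering.
    """
    w = np.minimum(boxes[:, None, 0], clusters[None, :, 0])
    h = np.minimum(boxes[:, None, 1], clusters[None, :, 1])
    inter = w * h
    union = (boxes[:, 0] * boxes[:, 1])[:, None] \
          + (clusters[:, 0] * clusters[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100, seed=0):
    """k-means over box sizes with 1 - IoU as the distance metric."""
    rng = np.random.default_rng(seed)
    clusters = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        # Assign each box to the cluster it overlaps most (max IoU).
        assign = np.argmax(iou_wh(boxes, clusters), axis=1)
        # Recompute centers; keep the old center if a cluster empties.
        new = np.array([boxes[assign == i].mean(axis=0)
                        if np.any(assign == i) else clusters[i]
                        for i in range(k)])
        if np.allclose(new, clusters):
            break
        clusters = new
    # Return anchors sorted by area, small to large.
    return clusters[np.argsort(clusters[:, 0] * clusters[:, 1])]
```

Common implementations update centers with the mean or median of the assigned boxes; the mean is used here for simplicity.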
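The added detection scales follow YOLOv3's per-cell box parameterization: a cell at grid offset (cx, cy) on a feature map with stride s decodes raw network outputs (tx, ty, tw, th) against an anchor (pw, ph). A lower-level feature map has a smaller stride and thus a finer grid, which is why predicting on low-level features tightens localization for small objects. A minimal sketch of the standard YOLOv3 decoding (variable names are illustrative):

```python
import math

def decode_box(tx, ty, tw, th, cx, cy, stride, pw, ph):
    """Decode raw YOLOv3 outputs into a center-format box in input pixels.

    (cx, cy): integer grid-cell offsets; stride: input pixels per cell;
    (pw, ph): prior (anchor) width/height in input pixels.
    """
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    bx = (sigmoid(tx) + cx) * stride   # center x, constrained inside its cell
    by = (sigmoid(ty) + cy) * stride   # center y, likewise
    bw = pw * math.exp(tw)             # width rescales the anchor exponentially
    bh = ph * math.exp(th)             # height, likewise
    return bx, by, bw, bh
```

With all raw outputs zero, the box sits at the center of its cell with exactly the anchor's size, e.g. `decode_box(0, 0, 0, 0, cx=3, cy=4, stride=32, pw=116, ph=90)` gives a 116×90 box centered at (112, 144).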
Keywords/Search Tags: Traffic driving scene, object detection, visual attention, eye movement, YOLOv3