Research On Fire Detection Method Based On Deep Learning

Posted on: 2024-01-31 | Degree: Master | Type: Thesis
Country: China | Candidate: Z Y Quan | Full Text: PDF
GTID: 2531307118465864 | Subject: Master of Electronic Information (Professional Degree)
Abstract/Summary:
Fires are extremely serious public safety incidents that can cause massive damage in a short time. In recent years, frequent fire accidents in China have posed a serious threat to people's lives and property, so fast and accurate fire detection has long been an important research direction. Traditional fire detection methods rely mainly on fire detectors, but they suffer from high false alarm rates, limited applicability, and long detection delays, and cannot meet the requirements of modern fire detection. With the continuous development of computer-vision-based object detection, many deep learning object detection models have shown excellent accuracy and are well suited to fire detection. However, these models usually require large amounts of computing resources to complete detection in an acceptable time, and cannot meet the requirements of detection on mobile or embedded devices. This thesis studies deep-learning-based fire detection; the specific work is as follows:

(1) An urban fire dataset containing 4,200 flame images was constructed through manual collection, cleaning, and annotation. Based on YOLOv4, an improved K-means++ anchor clustering method was adopted to obtain preset anchor boxes that better fit the self-built dataset; by letting the model learn object features more effectively, this improved AP by 0.5%.

(2) A fire detection method based on a lightweight improved YOLOv4 was proposed, aiming to improve the real-time performance of fire detection by making the network lightweight. To address the complex structure and redundant parameters of YOLOv4, lightweight networks from the MobileNet series and GhostNet were introduced to replace the original YOLOv4 backbone feature extraction network, and the remaining 3 × 3 standard convolutions in the network were replaced with depthwise separable convolutions. Four improved lightweight models were then trained and tested. The results show that MobileNetV2-YOLOv4 performs best overall: compared with the YOLOv4 baseline using the improved anchor clustering, its parameter count and model size were reduced by a factor of six and its detection speed increased by 56.1%, at the cost of a 1.21% drop in accuracy.

(3) Although MobileNetV2-YOLOv4 successfully compressed the model size and parameter count, it lost some accuracy. To address this, an adaptive spatial feature fusion method is introduced to improve the YOLOv4 feature fusion network. This method enhances the expressive ability of PANet across spatial scales and channel dimensions, makes full use of multi-scale features, and further improves detection performance. The improved MobileNetV2-YOLOv4 achieves a good balance between speed and accuracy, gaining 1.45% AP for a small additional computational cost. Ablation experiments verify the effectiveness of the optimization strategies adopted in this thesis. Finally, a fire detection system with a visual interface was developed using third-party libraries such as PyQt5 and OpenCV. The system supports detection on local images and videos as well as real-time detection from external image input devices; it is easy to use and offers good real-time performance.
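As a rough illustration of contribution (1), the sketch below shows how K-means++-seeded clustering over ground-truth box sizes with a 1 − IoU distance (the metric commonly used to derive YOLO anchors) can produce preset anchor boxes for a custom dataset. This is a minimal sketch under those assumptions, not the thesis code; all function names are illustrative.

```python
# Illustrative sketch (not the thesis code): K-means++-seeded clustering of
# ground-truth box sizes (w, h) with a 1 - IoU distance, as commonly used to
# derive YOLO anchor boxes for a custom dataset.
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between (N, 2) box sizes and (K, 2) anchor sizes, both anchored at the origin."""
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = boxes[:, 0:1] * boxes[:, 1:2] + anchors[None, :, 0] * anchors[None, :, 1] - inter
    return inter / union

def kmeans_pp_anchors(boxes, k=9, iters=100, seed=0):
    boxes = np.asarray(boxes, dtype=float)
    rng = np.random.default_rng(seed)
    # K-means++ seeding: first center uniform, later centers sampled
    # proportionally to the squared distance to the nearest chosen center.
    centers = [boxes[rng.integers(len(boxes))]]
    while len(centers) < k:
        d = 1.0 - iou_wh(boxes, np.array(centers))        # (N, current_k)
        d2 = d.min(axis=1) ** 2
        centers.append(boxes[rng.choice(len(boxes), p=d2 / d2.sum())])
    anchors = np.array(centers, dtype=float)
    # Standard K-means refinement under the 1 - IoU distance.
    for _ in range(iters):
        assign = np.argmin(1.0 - iou_wh(boxes, anchors), axis=1)
        for j in range(k):
            if np.any(assign == j):
                anchors[j] = boxes[assign == j].mean(axis=0)
    return anchors[np.argsort(anchors.prod(axis=1))]      # sort by area, small to large
```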
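For contribution (2), the following PyTorch sketch shows the kind of substitution described above: a 3 × 3 standard convolution replaced by a depthwise separable convolution (a depthwise 3 × 3 followed by a pointwise 1 × 1). It is only an illustrative module under assumed channel sizes, not the thesis implementation.

```python
# Illustrative sketch: a depthwise separable block that can stand in for a 3x3 standard conv.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # Depthwise 3x3: one filter per input channel (groups=in_ch).
        self.depthwise = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                      padding=1, groups=in_ch, bias=False),
            nn.BatchNorm2d(in_ch),
            nn.LeakyReLU(0.1, inplace=True),
        )
        # Pointwise 1x1: mixes channels and sets the output width.
        self.pointwise = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.1, inplace=True),
        )

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# A 3x3 standard conv with 256 -> 512 channels needs 3*3*256*512 ~ 1.18M weights;
# the separable version needs 3*3*256 + 256*512 ~ 0.13M, roughly a 9x reduction.
x = torch.randn(1, 256, 52, 52)
print(DepthwiseSeparableConv(256, 512)(x).shape)  # torch.Size([1, 512, 52, 52])
```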
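For contribution (3), the sketch below illustrates the general idea of adaptive spatial feature fusion: each output scale learns per-pixel softmax weights over the (resized) PANet levels and fuses them with a weighted sum. The channel widths, weight-prediction layers, and module names are assumptions for illustration, not the configuration used in the thesis.

```python
# Illustrative sketch of adaptive spatial feature fusion over three PANet levels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASFFHead(nn.Module):
    """Fuse three same-width feature maps at the resolution of the first (target) one."""
    def __init__(self, channels, weight_ch=16):
        super().__init__()
        # 1x1 convs compress each level before predicting the fusion weights.
        self.weight_convs = nn.ModuleList(
            [nn.Conv2d(channels, weight_ch, kernel_size=1) for _ in range(3)]
        )
        self.weight_fuse = nn.Conv2d(3 * weight_ch, 3, kernel_size=1)

    def forward(self, target, *others):
        # Resize the other levels to the target level's spatial size.
        feats = [target] + [
            F.interpolate(f, size=target.shape[-2:], mode="nearest") for f in others
        ]
        w = torch.cat([conv(f) for conv, f in zip(self.weight_convs, feats)], dim=1)
        w = torch.softmax(self.weight_fuse(w), dim=1)      # (B, 3, H, W), sums to 1 per pixel
        return sum(w[:, i:i + 1] * feats[i] for i in range(3))

p3 = torch.randn(1, 128, 52, 52)   # assume the PANet outputs already share one channel width
p4 = torch.randn(1, 128, 26, 26)
p5 = torch.randn(1, 128, 13, 13)
print(ASFFHead(128)(p3, p4, p5).shape)  # torch.Size([1, 128, 52, 52])
```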
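The real-time detection path of a system like the one described at the end of the abstract might look like the OpenCV loop below, assuming a `detect(frame)` callable that returns flame boxes with confidence scores; the callable and its output format are hypothetical, and the thesis interface itself is built with PyQt5.

```python
# Illustrative sketch: OpenCV grabs frames from a camera index or video file and a
# hypothetical detect(frame) callable returns (x1, y1, x2, y2, score) flame boxes to draw.
import cv2

def run_stream(source, detect):
    """source: camera index (e.g. 0) or a video file path; detect: frame -> list of boxes."""
    cap = cv2.VideoCapture(source)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        for (x1, y1, x2, y2, score) in detect(frame):
            cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 2)
            cv2.putText(frame, f"fire {score:.2f}", (int(x1), int(y1) - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
        cv2.imshow("fire detection", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):   # press q to quit
            break
    cap.release()
    cv2.destroyAllWindows()
```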
Keywords/Search Tags:fire detection, deep learning, YOLOv4, lightweight network, adaptive spatial feature fusion