With the expansion of cities, the analysis of video data features for public safety has become an important issue, and video surveillance systems play an increasingly large role in protecting it. Within these systems, research on video data characteristics is the key supporting technology of current monitoring in the public-safety domain. This paper focuses on two aspects of video data analytics: static-characteristic-based video analysis and dynamic-characteristic-based video analysis. Static-characteristic analysis mainly aims to detect the quality of the image or video produced by a camera, while dynamic-characteristic analysis targets action detection and recognition in the video stream from a camera. As the requirements on video surveillance systems increase, the analysis of both static and dynamic video characteristics has attracted growing attention from industry and academia.

For static video characteristics, failures in a video surveillance system currently cost the operation and maintenance team considerable time: they must identify and locate the affected video data already stored in the cloud and discover the failure in the surveillance system itself. When the end cameras of a video surveillance system reach a large scale, it is difficult to guarantee the real-time performance of the system; meanwhile, failed video data wastes storage space in the cloud. For dynamic video characteristics, the surveillance system is typically used only to record video data, including dynamic behavior, rather than to make decisions about the behavior in a scene. When an anomalous action occurs, searching large-scale video data for the frames of interest takes a great deal of manpower. If dynamic-characteristic analysis for behavior recognition is deployed under the cloud computing model, a large amount of
video data from edge cameras must be transmitted to the cloud, which consumes a large amount of bandwidth in a large-scale video surveillance system and makes it difficult to guarantee the real-time performance of anomalous-action recognition. The edge computing framework proposed in this paper optimizes the processing of video surveillance data: edge computing performs data processing at or near the data source, which reduces the amount of data transmitted, extracts the static or dynamic characteristics of the surveillance video, and improves the real-time performance of surveillance video processing. Above all, it plays a key role in the extraction of dynamic characteristics in real time.

First, this paper proposes the video usefulness model for large-scale video surveillance systems. The video usefulness model detects static video characteristics in a video surveillance system. It classifies the data failure types of different monitoring systems across three domains, i.e., the edge, the user, and the cloud, and proposes a detection method for each failure type. We also propose a new scheduling strategy based on these failure types. Experimental results verify that running the different detection methods at the edge does not affect their accuracy and shortens the average repair time in the video surveillance system. Detected invalid video data need not be uploaded to the cloud, which saves cloud storage space and reduces network bandwidth utilization.

Second, dynamic-characteristic video analysis can then be carried out on the video data that contains no failure. This paper proposes a dynamic video analysis framework that is divided into three parts: the device, the edge server, and the cloud. Each part includes a convolution layer, a full
connection layer, and an exit point. The dynamic-characteristic analysis in this framework proceeds roughly as follows. (1) Video data read from the camera is fed into the framework on the device; when the recognition rate on the device meets the user's requirement, the result is sent to the user through the device's exit point, and the output of the device's convolution layer is not transmitted to the edge server. (2) Otherwise, the output of the device's last convolution layer is transmitted to the edge server; when the recognition rate at the edge server meets the requirement, the result passes through the edge server's exit point and is sent to the user, so the output of the edge server's last convolution layer need not be transmitted to the cloud. (3) Otherwise, the output of the edge server's last convolution layer is transmitted to the cloud, where the recognition result is obtained through the cloud part of the framework and sent to the user. This paper simulates the device, the edge server, and the cloud by configuring different computing resources on the same testbed and validates the framework on this basis. The dynamic video characteristic analysis framework reduces execution time while ensuring the same accuracy.
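The three-tier early-exit flow described above can be sketched in a few lines of code. This is a minimal illustration of the exit-point decision logic only, not the paper's implementation: the `Tier` type, the stand-in `infer` functions, and the confidence thresholds are all illustrative assumptions, and the real framework would run convolution and full connection layers at each tier.

```python
# Sketch of early-exit inference across device, edge server, and cloud.
# All names and thresholds are illustrative assumptions, not the paper's code.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Tier:
    """One stage of the partitioned network: device, edge server, or cloud."""
    name: str
    # Runs this tier's layers on the incoming features and returns
    # (output_features, exit_confidence) at the tier's exit point.
    infer: Callable[[list], Tuple[list, float]]
    threshold: float  # minimum confidence required to exit at this tier

def early_exit_inference(frame_features: list, tiers: List[Tier]) -> Tuple[str, list]:
    """Propagate features tier by tier, returning the result to the user at
    the first exit point whose confidence meets the requirement.  Only when
    the confidence is too low are the tier's intermediate features forwarded
    to the next tier (device -> edge server -> cloud)."""
    features = frame_features
    for tier in tiers:
        features, confidence = tier.infer(features)
        if confidence >= tier.threshold or tier is tiers[-1]:
            return tier.name, features  # exit here; nothing is sent upstream
    raise RuntimeError("unreachable: the last tier always exits")

# Usage sketch: the device is unsure, so features go to the edge server,
# whose exit point is confident enough to answer without involving the cloud.
tiers = [
    Tier("device", lambda f: (f, 0.60), threshold=0.90),
    Tier("edge",   lambda f: (f, 0.95), threshold=0.90),
    Tier("cloud",  lambda f: (f, 0.99), threshold=0.90),
]
exit_tier, _ = early_exit_inference([0.1, 0.2], tiers)  # exit_tier == "edge"
```

The design point the sketch captures is that each additional tier is consulted only on demand, so bandwidth toward the cloud is consumed only for the frames the earlier exit points cannot classify confidently.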