With economic development and advances in science and technology, more and more surveillance cameras are being deployed within a single park or scene. Traditional split-screen multi-channel surveillance systems can no longer meet users' needs: surveillance images separated from one another across multiple channels make it difficult for users to analyze and track moving targets, and faced with a large number of two-dimensional surveillance images, users also struggle to form an intuitive three-dimensional spatial perception and a global view of the surveillance scene. Therefore, some scholars have proposed intelligent surveillance systems based on the augmented virtual environment, which superimpose surveillance video, as real-world information, onto a virtual three-dimensional environment, realizing the mapping of video content from two dimensions to three. However, in many such systems, building the 3D virtual environment requires manual modeling and camera calibration, which is time-consuming and laborious. In addition, when these systems handle overlapping regions of multiple videos, they stitch the videos first and then superimpose the result onto the 3D scene; the drawback of this approach is that a reasonable observation effect is obtained only from the original shooting viewpoint, while serious distortion occurs from other viewpoints. To address these problems, this paper proposes an augmented virtual environment system that fuses multiple surveillance videos in real time. With this system, users can quickly reconstruct a static three-dimensional virtual scene from images taken by mobile phones or UAVs, explore freely within the three-dimensional virtual environment into which multiple surveillance videos are fused, observe the whole scene from a bird's-eye view, and obtain reasonable observation results from any viewpoint, gaining a more intuitive and global experience. The main work of this paper is
as follows: (1) A fully automatic 3D virtual scene reconstruction pipeline based on a 3D reconstruction algorithm is proposed; modeling and camera calibration require no manual intervention, which improves efficiency. (2) Three algorithms are proposed. First, an optimal texture selection strategy based on the shooting angle and the area proportion of primitives is proposed to color the 3D scene, which improves the visual effect of the 3D virtual environment. Second, a strategy for fusing multiple videos with the virtual environment based on dynamic textures is proposed, suitable for videos shot in large scenes from high vantage points; this strategy selects and blends the optimal video textures in real time according to the pose of the user's observation camera, which effectively reduces texture distortion and allows the user to obtain reasonable observation results from any viewpoint. Third, a strategy for fusing multiple videos with the virtual environment based on foreground extraction is proposed, suitable for videos shot in small scenes from low vantage points; this strategy first extracts the foreground (i.e., moving objects) from the video, then registers the foreground and generates a corresponding three-dimensional model from its contour, helping users better track and understand the motion of the foreground in three-dimensional space. Experiments in multiple real scenes show that the augmented virtual environment system implemented in this paper fuses multiple videos with the 3D scene more effectively and improves users' three-dimensional understanding and global perception of multi-channel surveillance video content.
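The real-time selection and blending of video textures by observation pose, described in contribution (2), can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: scoring each camera by the cosine similarity between its viewing direction and the user's current viewing direction, then normalizing the top-k scores into blend weights, is one plausible realization of "select and blend the optimal video texture according to the user's observation camera pose"; the function names and the top-k scheme are hypothetical.

```python
import numpy as np

def view_score(cam_dir, user_dir):
    # Cosine similarity between a surveillance camera's viewing direction
    # and the observer's viewing direction; a higher score suggests the
    # camera's texture will appear less distorted from this viewpoint.
    c = cam_dir / np.linalg.norm(cam_dir)
    u = user_dir / np.linalg.norm(user_dir)
    return float(np.dot(c, u))

def blend_weights(cam_dirs, user_dir, top_k=2):
    # Rank all cameras by view score (clamped at zero so cameras facing
    # away contribute nothing) and normalize the top-k scores into
    # per-texture blend weights.
    scores = np.array([max(view_score(d, user_dir), 0.0) for d in cam_dirs])
    idx = np.argsort(scores)[::-1][:top_k]
    w = scores[idx]
    total = w.sum()
    if total == 0.0:
        # No camera faces the observer; fall back to uniform weights.
        return idx, np.full(len(idx), 1.0 / len(idx))
    return idx, w / total

# Example: three cameras, observer looking along the x-axis.
cams = [np.array([1.0, 0.0, 0.0]),
        np.array([0.0, 1.0, 0.0]),
        np.array([0.9, 0.1, 0.0])]
idx, w = blend_weights(cams, np.array([1.0, 0.0, 0.0]))
```

In a renderer, the selected textures would then be projected onto the reconstructed geometry and mixed per fragment with these weights; smoothing the weights over time would avoid popping as the observer moves.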