The smart city is the outcome of a new round of information-technology reform and the further development of the knowledge economy. It is a manifestation of the deep integration of industrialization, urbanization, and informatization, and of their progress to a higher stage. The urban 3D model is a powerful data support for smart-city construction. Integrating urban 3D models with surveillance-video information not only enables automatic, continuous, and low-cost updating of urban 3D model data, but is also an effective way to make maximum use of existing data resources to obtain a real-scene urban 3D model.

As an effective means of monitoring, video cameras already play an irreplaceable role in urban security and management, but traditional video surveillance is isolated and lacks correlation between applications. Fusing the urban 3D model with surveillance-video information organically integrates the basic data of the security and geographic-information industries. On this basis, it can not only serve the construction of urban intelligent video-surveillance systems, but also actively promote cooperation between the security industry and geographic-information-related industries and enterprises such as electric power and survey and design. In addition, the fused urban 3D model and video information can be regarded as ordered big data with a unified geographic reference frame, a unified time benchmark, and internal logical relationships; on this basis it can drive new explorations of deep learning in model building, training mechanisms, efficiency, and other aspects, and thus has important theoretical significance and research value.

Based on the 3D model of a city and the spherical panoramic video output of two types of commercial panoramic cameras (fisheye and multi-lens), this paper studies the automatic fusion method and key technical issues for the two camera types. The main contents include:

(1) Fisheye camera calibration: A
self-calibration method for fisheye cameras in natural scenes is proposed. A strict calibration equation is established to solve the interior parameters of the fisheye lens, taking full advantage of the projection-ellipse constraint and the internal geometric characteristics of ellipses under spherical perspective projection. First, a fisheye spherical imaging model with complete Interior Orientation Parameters (IOPs) is established. Then, according to the correlation between the equivalent focal length f of the fisheye lens and the radial distortion parameters (k1, k2), the radial-distortion projection-ellipse constraint (RDPEC) for spatial straight lines is derived, and a high-precision calibration equation for the fisheye IOPs is built on it. Finally, initial parameters are obtained from the geometric characteristics of the fisheye image outline ellipse (FIOE), and the IOPs are optimized by least squares, using the projections of spatial straight lines in the fisheye image as observations. Experimental results show that the checkerboard multi-view calibration accuracy of the proposed algorithm reaches 0.1 pixel, comparable to that of online calibration tools based on planar references, while the calibration accuracy on online fisheye images without any calibration reference averages about 1/3 pixel, which is superior to existing algorithms in parameter completeness and overall accuracy. The calibration procedure is simple and does not depend on specific reference objects, so it is of good practical value for fisheye cameras that enter long-term, continuous operation after installation.

(2) Fisheye video correction: A view-dependent perspective correction (VDPC) model for fisheye images is proposed. It achieves high-precision planar perspective correction of different regions of a fisheye image by adaptively selecting the correction plane, producing the visual effect of 3D local roaming in a hemispherical space. Tests on fisheye images from different viewpoints show that, compared with traditional perspective projection conversion (PPC), the VDPC model corrects different regions of the fisheye image more evenly and preserves image detail more flexibly. Running on a CPU alone, the algorithm generates 512×512-pixel corrected images at 58 fps; with an ordinary GPU, 1024×1024-pixel corrected images are generated at 60 fps. The algorithm therefore fully meets the requirements of real-time video processing, which benefits fisheye-image applications in many scenarios.

(3) Multi-lens panoramic camera (MPC) calibration: MPC calibration normally depends on high-precision 3D control information (a calibration environment), and because the sub-cameras have little field-of-view overlap it is difficult for them to observe a calibration reference simultaneously. A two-stage MPC calibration method combining planar-reference theory and rotary photogrammetry is therefore proposed and implemented. In the first stage, a strict rotary-photography equation is formed by combining a planar grid with the rotating-platform coordinate system; given initial values of the sub-cameras' exterior parameters, the controllable rotation "expands" the control range of a single 2D planar grid over the whole MPC, and a bundle adjustment between the sub-cameras and the planar-grid images optimizes the parameters. In the second stage, the origin of the turntable rotation coordinate system is shifted to the geometric centroid of the sub-cameras' perspective centers to establish the MPC spatial coordinate system; the rotation center is added as a redundant observation linking the sub-cameras, and a bundle adjustment over the geometric relations among the sub-cameras solves their relative exterior parameters. Experimental results show that the average reprojection error against the planar reference is less than 0.5 pixel, and the relative orientation error of the same point between sub-cameras is about 1 pixel. The method thus achieves high-precision combined MPC calibration using only a common 2D numerically controlled turntable and a single checkerboard, with low cost, simple operation, and low environmental requirements; it removes the dependence on a 3D calibration field and has good practical value.

(4) Multi-lens panoramic video stitching: To address the parallax-artifact problem in the spherical panoramic video produced from MPC indoor calibration parameters, a two-stage adaptive seamless spherical panoramic video generation method is proposed and implemented. In the first stage, corresponding points in the overlap regions of real-scene video are used as observations; an angle-error equation relating the spherical projection center to the spherical directions of corresponding pixels is minimized by least squares to estimate the spherical-projection parameters, reducing the impact on output quality of both the misalignment between the sub-cameras' perspective centers and the spherical projection center, and of scene-depth variation. In the second stage, a thin-plate-spline (TPS) model is established, with the spherical reprojection geometry of the MPC sub-camera videos as the global transformation and corresponding image points on the stitching seam as control points, realizing direct pixel mapping from the sub-camera videos to the spherical mosaic video and minimizing the pixel stitching error in the overlap regions. Experimental results show that the method generates seamless MPC spherical panoramic video by combining scene content and camera parameters; the computation is simple and efficient and fully meets the requirements of high-frame-rate MPC video output.

(5) Real-time fusion of panoramic video and the urban 3D model:
Aiming at the viewpoint discontinuity and spatial-information inconsistency among multiple panoramic videos, a real-time fusion method for panoramic video and 3D real scenes in a wide-area environment is proposed. First, the coplanarity constraint of traditional straight-line photogrammetry is extended: taking horizontally orthogonal groups of parallel space lines as control conditions and combining them with vanishing-point geometry, the exterior parameters of multiple panoramic cameras are calibrated simultaneously, achieving accurate spatial registration of panoramic video in a wide-area environment. Then, borrowing the idea of digital differential rectification from photogrammetry, the spherical projection of the panoramic video is converted to a parallel projection onto a unified spatial coordinate surface (the ground); exploiting the fact that installed surveillance cameras do not move, so their imaging geometry remains fixed, the fusion of video and 3D model is divided into an offline stage and an online stage. Offline, the registration parameters are computed and a lookup table is generated that stores the mapping between panoramic-video pixels and locations in the scene model. Online, combining the lookup table, the main/sub-stream characteristics of the video, a view-dependent loading and scheduling strategy for the 3D scene, and GPU parallel computing, video textures are generated and mapped onto the scene ground automatically and in real time. Verification in a gas-station scene shows that the method achieves high-precision spatial registration and efficient fusion of panoramic video: the average registration accuracy of nine panoramic video channels is better than 0.5 pixel, and the fusion runs at no less than 30 fps with good visual quality. It is of great significance and practical value for effectively organizing large numbers of surveillance cameras to serve social production and life and for carrying out innovative applications.
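The fisheye spherical imaging model with radial distortion described in contribution (1) can be illustrated with a minimal sketch. Only the symbols f, k1, and k2 come from the abstract; the equidistant projection form and the function name are assumptions made for illustration, not the thesis's exact formulation.

```python
import numpy as np

def project_fisheye(point, f, cx, cy, k1=0.0, k2=0.0):
    """Project a 3D point in the camera frame onto a fisheye image.

    Assumed equidistant model: the image radius grows with the incidence
    angle theta as r = f * theta_d, where the polynomial radial distortion
    is theta_d = theta * (1 + k1*theta**2 + k2*theta**4).
    """
    x, y, z = point
    theta = np.arctan2(np.hypot(x, y), z)                  # angle from the optical axis
    theta_d = theta * (1 + k1 * theta**2 + k2 * theta**4)  # distorted angle
    phi = np.arctan2(y, x)                                 # azimuth around the axis
    r = f * theta_d
    return cx + r * np.cos(phi), cy + r * np.sin(phi)
```

Under such a model the image of a spatial straight line is a curve whose shape is governed by f, k1, and k2, which is what makes line projections usable as calibration observations.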
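The view-dependent correction of contribution (2) amounts to sampling the fisheye image through a virtual pinhole view aimed at a chosen direction. The sketch below builds such a sampling grid under an assumed equidistant fisheye model; the pan/tilt parameterization and the function name are illustrative assumptions.

```python
import numpy as np

def correction_grid(view_dir_deg, fov_deg, out_size, f, cx, cy):
    """Map a virtual pinhole view aimed at (pan, tilt) back into fisheye
    image coordinates; the tangent (correction) plane follows the chosen
    viewing direction."""
    pan, tilt = np.radians(view_dir_deg)
    n = out_size
    fv = (n / 2) / np.tan(np.radians(fov_deg) / 2)   # virtual focal length
    u, v = np.meshgrid(np.arange(n) - n / 2, np.arange(n) - n / 2)
    rays = np.stack([u, v, np.full_like(u, fv)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
    # rotate the view rays into the fisheye camera frame
    Ry = np.array([[np.cos(pan), 0, np.sin(pan)],
                   [0, 1, 0],
                   [-np.sin(pan), 0, np.cos(pan)]])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(tilt), -np.sin(tilt)],
                   [0, np.sin(tilt), np.cos(tilt)]])
    d = rays @ (Ry @ Rx).T
    theta = np.arccos(np.clip(d[..., 2], -1.0, 1.0))  # equidistant model
    phi = np.arctan2(d[..., 1], d[..., 0])
    return cx + f * theta * np.cos(phi), cy + f * theta * np.sin(phi)
```

Sampling the fisheye frame at the returned coordinates (e.g. with a remap operation) yields the corrected view; regenerating the grid as the viewing direction changes gives the local-roaming effect, and the per-pixel structure maps naturally onto a GPU.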
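The core geometric idea of stage one in contribution (3), that rotating the turntable "expands" a single planar grid into many virtual control poses, reduces to composing rotations. A sketch with hypothetical names:

```python
import numpy as np

def rotz(a):
    """Rotation by angle a about the turntable's vertical axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def board_pose_at(alpha, R_ct, R_tb0):
    """Predict the board-to-camera rotation after turning the table by
    alpha, given the camera-to-turntable rotation R_ct and the board pose
    R_tb0 at the zero position (all names hypothetical)."""
    return R_ct @ rotz(alpha) @ R_tb0
```

Each turntable angle alpha yields a new predicted board pose, so one physical checkerboard provides control observations spanning the fields of view of all sub-cameras.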
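The second stage of contribution (4) uses a thin-plate spline keyed to matched points on the stitching seam. A standard 2D TPS fit/apply pair is sketched below with generic control points; in the thesis the source points would come from the global spherical reprojection and the targets from the matched seam points.

```python
import numpy as np

def tps_fit(src, dst, reg=0.0):
    """Fit a 2D thin-plate spline mapping control points src to dst.

    Solves the standard TPS system [K P; P^T 0] [w; a] = [dst; 0] with
    kernel U(r) = r**2 * log(r); reg adds optional smoothing.
    """
    n = len(src)
    d2 = np.sum((src[:, None] - src[None, :]) ** 2, axis=-1)
    K = np.where(d2 > 0, 0.5 * d2 * np.log(d2 + 1e-300), 0.0) + reg * np.eye(n)
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.vstack([dst, np.zeros((3, 2))])
    return np.linalg.solve(A, b)          # stacked [w; a], one column per axis

def tps_apply(params, src, pts):
    """Warp query points pts with a spline fitted on control points src."""
    n = len(src)
    d2 = np.sum((pts[:, None] - src[None, :]) ** 2, axis=-1)
    U = np.where(d2 > 0, 0.5 * d2 * np.log(d2 + 1e-300), 0.0)
    return U @ params[:n] + np.hstack([np.ones((len(pts), 1)), pts]) @ params[n:]
```

With reg = 0 the spline interpolates the seam correspondences exactly while the affine part carries the global transformation, which matches the role the TPS plays in the stitching stage.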
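The offline/online split of contribution (5) can be sketched as a lookup table that maps each video pixel to a ground-plane location once, plus a per-frame splat that reuses it. For brevity a pinhole camera stands in for the panoramic projection, no ray is assumed parallel to the ground, and all names and parameters are illustrative.

```python
import numpy as np

def build_ground_lut(width, height, cam_pos, R, f):
    """Offline stage: intersect every pixel ray with the ground plane z = 0
    and store the hit point, mimicking the lookup table that links video
    pixels to scene-model locations."""
    u, v = np.meshgrid(np.arange(width) - width / 2,
                       np.arange(height) - height / 2)
    rays = np.stack([u, v, np.full_like(u, f)], axis=-1) @ R.T  # rays in world frame
    t = -cam_pos[2] / rays[..., 2]        # ray parameter where z reaches 0
    valid = t > 0                         # keep only rays hitting the ground ahead
    hits = cam_pos + t[..., None] * rays
    return hits[..., :2], valid           # ground (x, y) per pixel

def fuse_frame(frame, lut, valid, grid_size, cell):
    """Online stage: splat the current frame onto a ground-grid texture."""
    tex = np.zeros((grid_size, grid_size, frame.shape[-1]), frame.dtype)
    gx = (lut[..., 0] / cell + grid_size / 2).astype(int)
    gy = (lut[..., 1] / cell + grid_size / 2).astype(int)
    ok = valid & (gx >= 0) & (gx < grid_size) & (gy >= 0) & (gy < grid_size)
    tex[gy[ok], gx[ok]] = frame[ok]
    return tex
```

Because the cameras do not move, build_ground_lut runs once offline; fuse_frame is pure indexed assignment per frame, which is what makes a GPU implementation reach real-time rates.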