Video moving object segmentation, as a research direction in computer vision, has important practical value. However, effective segmentation of moving objects in video remains challenging owing to the difficulty of annotating video data and to factors such as lighting and occlusion. Co-segmentation exploits the collaborative information shared by common objects across images, offering a new research direction for video moving object segmentation and largely eliminating the impact of complex backgrounds on segmentation accuracy. We investigated how to segment moving objects in video frames effectively using co-segmentation, with the following contributions:

(1) Co-segmentation of moving objects in video frames using an optical flow field and level sets. To address the poor segmentation caused by continuous changes in object posture and by complex environments, we propose a co-segmentation algorithm that combines an optical flow field with the level set method to extract common moving objects from multiple frames. First, the Lucas-Kanade (L-K) pyramid optical flow field is combined with the level set method to estimate the initial position of the moving foreground and obtain the initial contour of the evolving target. Second, a motion constraint based on the L-K optical flow field is designed to control the evolution speed and direction of the level set curve, and the final co-segmentation is performed by incorporating an inter-frame moving object saliency map. Finally, comparative experiments against four existing representative algorithms on the SegTrack dataset, using five evaluation metrics, show that the proposed method effectively segments moving objects against complex backgrounds.

(2) Co-segmentation of video saliency prediction maps based on visual attention. To address the low efficiency of salient object detection caused by the large scale of current deep-learning saliency models and their neglect of the underlying visual attention mechanism, we propose a co-segmentation algorithm driven by a visual-attention-based video saliency prediction map. First, a ConvLSTM module extracts temporal features between frames and is integrated into a lightweight encoder-ConvLSTM-decoder network. Second, a visual attention bias is extracted with domain-adaptive center weights and fused with the output features of the decoder; after training and inference through a smoothing layer with a Gaussian convolution kernel, a co-saliency prediction map of the video frames is obtained, on which co-segmentation is then performed. Finally, comparative experiments against other deep-learning methods on the UCF Sports video dataset show that the proposed model is smaller and runs faster with no loss of prediction accuracy.

(3) Based on the above research, a prototype system for moving object co-segmentation in video frames was designed and developed, which effectively validates the proposed methods.
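The optical-flow estimation step in contribution (1) can be illustrated with a minimal single-scale Lucas-Kanade sketch in NumPy. This is not the thesis's pyramidal implementation (no image pyramid, no level set coupling); the function name and parameters are illustrative, and it only shows the core least-squares flow estimate over a local window:

```python
import numpy as np

def lucas_kanade_flow(frame1, frame2, window=7):
    """Estimate a per-pixel flow vector (u, v) by solving the
    Lucas-Kanade least-squares system over a local window.
    Single-scale sketch; a pyramidal variant handles large motions."""
    Ix = np.gradient(frame1, axis=1)   # spatial gradients
    Iy = np.gradient(frame1, axis=0)
    It = frame2 - frame1               # temporal gradient
    half = window // 2
    h, w = frame1.shape
    flow = np.zeros((h, w, 2))
    for y in range(half, h - half):
        for x in range(half, w - half):
            ix = Ix[y - half:y + half + 1, x - half:x + half + 1].ravel()
            iy = Iy[y - half:y + half + 1, x - half:x + half + 1].ravel()
            it = It[y - half:y + half + 1, x - half:x + half + 1].ravel()
            A = np.stack([ix, iy], axis=1)
            ATA = A.T @ A
            # Skip ill-conditioned (textureless / aperture-limited) windows.
            if np.linalg.cond(ATA) < 1e6:
                flow[y, x] = np.linalg.solve(ATA, -A.T @ it)
    return flow
```

In the thesis pipeline, the magnitude and direction of such a flow field would seed the initial level set contour and constrain its evolution; here the flow alone already separates a moving region from a static background.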
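The center-weighting and smoothing stage of contribution (2) can likewise be sketched. The snippet below is a simplified, hypothetical stand-in for the thesis's domain-adaptive center weights and Gaussian smoothing layer: it fuses a raw saliency map with a fixed centered Gaussian prior and then applies separable Gaussian smoothing (the actual model learns these components inside the encoder-ConvLSTM-decoder network):

```python
import numpy as np

def _gaussian_1d(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def center_bias_fusion(saliency, bias_sigma_frac=0.3, smooth_sigma=2.0):
    """Fuse a saliency map with a centered Gaussian attention prior,
    then smooth with a separable Gaussian kernel. Illustrative only:
    the thesis learns adaptive center weights rather than fixing them."""
    h, w = saliency.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    prior = np.exp(-(((yy - cy) / (bias_sigma_frac * h)) ** 2
                     + ((xx - cx) / (bias_sigma_frac * w)) ** 2) / 2.0)
    fused = saliency * prior  # multiplicative fusion of map and prior
    radius = int(3 * smooth_sigma)
    k = _gaussian_1d(smooth_sigma, radius)
    fused = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, fused)
    fused = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, fused)
    return fused / (fused.max() + 1e-8)  # normalise to [0, 1]
```

Thresholding the resulting co-saliency map then yields the co-segmentation mask.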