Color video provides a colorful and dynamic visual experience, and it has come to dominate network media. Because of the limitations of early imaging techniques, however, people could only capture greyscale videos. How to present valuable greyscale videos with color information, so as to improve the visual experience, is a challenging problem. As an important task in computer vision, colorization restores the color information of greyscale videos and images. Traditional colorization methods usually depend on manual annotation, which is expensive. With the rapid development of artificial intelligence, new solutions based on deep learning have emerged. In this thesis, we study deep learning-based colorization methods. The major contributions and novelty are summarized as follows:

1. We first investigate existing colorization techniques, including traditional methods and the emerging deep learning-based methods. We categorize existing colorization methods and analyze their advantages and disadvantages, so as to establish a theoretical basis and identify novel ideas for the subsequent work.

2. We propose a video colorization method based on a channel attention mechanism. The method takes multiple video frames as input. First, the features of the greyscale video frames and the colorful reference frames are extracted by a feature extraction network. Second, channel attention imposes different weights on different feature channels to emphasize the more important features. Finally, the chrominance components of the greyscale video frames are restored by the colorization network. To maintain temporal consistency between frames and colorization performance on long video sequences, we use 3D convolution to extract spatially and temporally consistent information across video frames. Experimental results show that the proposed method preserves the temporal and spatial correlation between video frames and thus improves colorization performance,
especially for long video sequences.

3. We then propose a video colorization method based on self-attention and motion features. To address the difficulty that moving objects are hard to colorize well, we extract motion features from successive video frames to guide the colorization. First, motion features are extracted from the luminance components of the greyscale video frames and the reference video frames. Second, the relationship between the luminance and chrominance of the reference frames is extracted; by combining it with the motion features, the corresponding relationship for the greyscale video frames is then obtained. Finally, the chrominance of the greyscale video frames is predicted by the colorization network. Experimental results show that video colorization performance can be improved by incorporating motion features, especially for moving objects.
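The channel-attention reweighting described in contribution 2 can be sketched as a squeeze-and-excitation style gate: globally pool each feature channel, pass the pooled vector through a small bottleneck, and use a sigmoid output to rescale the channels. Everything below is illustrative only — the weight matrices are random placeholders standing in for learned parameters, and the function name and tensor shapes are assumptions, not the thesis's actual network.

```python
import numpy as np

def channel_attention(features, reduction=4, seed=0):
    """Squeeze-and-excitation style channel gate on a (channels, H, W) tensor.

    The two FC weight matrices are random stand-ins for learned parameters.
    """
    c = features.shape[0]
    rng = np.random.default_rng(seed)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1   # "squeeze" FC layer
    w2 = rng.standard_normal((c, c // reduction)) * 0.1   # "excite" FC layer

    squeeze = features.mean(axis=(1, 2))                  # global average pool -> (c,)
    hidden = np.maximum(w1 @ squeeze, 0.0)                # ReLU bottleneck
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))        # per-channel sigmoid gate in (0, 1)
    return features * weights[:, None, None]              # reweight each channel

feats = np.ones((8, 4, 4))
out = channel_attention(feats)
```

Because the gate is a sigmoid, each channel is scaled by a factor strictly between 0 and 1, which is how less informative channels get suppressed relative to important ones.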
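As a crude, hand-crafted stand-in for the motion-guided colorization in contribution 3, one can use the absolute luminance difference between the reference and current frame as a motion cue, and propagate reference chrominance only where the scene appears static. The function name, shapes, and the exponential confidence map below are all assumptions for illustration; the thesis's learned motion features and colorization network are not reproduced here.

```python
import numpy as np

def propagate_chroma(ref_luma, ref_chroma, cur_luma, sigma=0.1):
    """Toy motion-gated chrominance propagation between two frames.

    Uses |luminance difference| as a crude motion cue (a stand-in for a
    learned motion feature): copy reference chroma where the frame is
    static, attenuate it toward zero where objects appear to have moved.
    """
    motion = np.abs(cur_luma - ref_luma)      # large where content changed
    confidence = np.exp(-motion / sigma)      # ~1 where static, ~0 where moving
    return confidence * ref_chroma            # keep chroma only where confident

ref_luma = np.zeros((4, 4))
cur_luma = np.zeros((4, 4))
cur_luma[0, 0] = 1.0                          # one pixel's luminance changed ("moved")
ref_chroma = np.full((4, 4), 0.5)
chroma = propagate_chroma(ref_luma, ref_chroma, cur_luma)
```

In this toy example the static pixels keep the reference chrominance exactly, while the "moved" pixel's chrominance is attenuated nearly to zero — the same failure mode (moving objects losing color) that the learned motion features in contribution 3 are designed to address.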