Super-resolution restoration is an important research topic in image processing. It analyzes an image signal with software algorithms to recover a high-resolution image from its low-resolution observation. Super-resolution technology offers unique advantages and has been widely applied in fields such as medical imaging and video surveillance. Beyond improving image quality, super-resolution restoration also aids downstream computer-vision tasks such as object detection and image segmentation. Research on super-resolution restoration algorithms therefore holds significant importance.

This paper studies deep-learning-based video super-resolution restoration algorithms. Video super-resolution is achieved by aligning and fusing multiple adjacent frames and by introducing a generative adversarial network; after restoration by the proposed network, the video frames exhibit clearer texture and richer information. The main work of this paper is as follows.

Firstly, traditional video super-resolution methods usually reconstruct each frame independently. This paper proposes a method based on bidirectional adjacent frames: it exploits the temporal information between neighboring frames, retains temporal outputs in the feature maps, and performs spatial alignment and fusion of the current frame with its preceding and following frames. This strengthens the fusion of information between the two adjacent frames, so the restored images contain richer information.

Furthermore, this paper introduces an upsampling reconstruction method based on a generative adversarial network structure. Without changing the network details or parameter computation, the adversarial structure keeps the upsampled image realistic rather than overly smooth, addressing the blurred and unrealistic image boundaries caused by relying on a single loss function. A relativistic average discriminator, combined with three loss functions, is used as the basic element, and experimental results show that the images obtained after upsampling and reconstruction have clearer, more realistic texture.

Finally, to address the challenge of effectively fusing temporal sequence information with individual frames, this paper proposes a gradual merging method based on a multi-frame attention mechanism. Adjacent frames are combined with an attention mechanism to fuse features and fully learn the information shared between frames. This significantly reduces the difficulty of feature fusion and effectively improves the super-resolution restoration result.
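To make the alignment step concrete, the sketch below shows one common way to spatially align a neighboring frame's features to the current (reference) frame before fusion. It is a minimal illustration, not the paper's network: it assumes a PyTorch implementation and a precomputed optical-flow field (`flow`), and the function name `warp_to_reference` is hypothetical.

```python
import torch
import torch.nn.functional as F

def warp_to_reference(nbr, flow):
    """Warp a neighboring frame's features onto the reference frame.

    nbr:  (B, C, H, W) features of the neighboring frame (hypothetical input).
    flow: (B, 2, H, W) precomputed optical flow from reference to neighbor,
          channel 0 = horizontal displacement, channel 1 = vertical.
    """
    b, c, h, w = nbr.shape
    # Build a base sampling grid of pixel coordinates (x, y).
    yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xx, yy), dim=-1).float().to(nbr.device)   # (H, W, 2)
    # Displace the grid by the flow field.
    grid = base.unsqueeze(0) + flow.permute(0, 2, 3, 1)           # (B, H, W, 2)
    # Normalize coordinates to [-1, 1] as required by grid_sample.
    gx = 2.0 * grid[..., 0] / max(w - 1, 1) - 1.0
    gy = 2.0 * grid[..., 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)
    return F.grid_sample(nbr, grid, align_corners=True)
```

In a bidirectional setting, the preceding and following frames would each be warped toward the current frame in this way, and the aligned features then concatenated or merged for the subsequent fusion stage.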
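The relativistic average discriminator mentioned above compares real and generated samples against each other's average score rather than judging each sample in isolation. The following is a minimal sketch of the standard relativistic average (RaGAN-style) adversarial losses; the exact weighting and the two accompanying loss terms used in the paper (commonly a content loss and a perceptual loss in ESRGAN-style training) are not reproduced here, and the function names are hypothetical.

```python
import torch
import torch.nn.functional as F

def ra_discriminator_loss(real_logits, fake_logits):
    # D should rate real samples above the average fake score, and vice versa.
    d_real = F.binary_cross_entropy_with_logits(
        real_logits - fake_logits.mean(), torch.ones_like(real_logits))
    d_fake = F.binary_cross_entropy_with_logits(
        fake_logits - real_logits.mean(), torch.zeros_like(fake_logits))
    return 0.5 * (d_real + d_fake)

def ra_generator_loss(real_logits, fake_logits):
    # G tries to invert the relation: fakes should score above the average real.
    g_real = F.binary_cross_entropy_with_logits(
        real_logits - fake_logits.mean(), torch.zeros_like(real_logits))
    g_fake = F.binary_cross_entropy_with_logits(
        fake_logits - real_logits.mean(), torch.ones_like(fake_logits))
    return 0.5 * (g_real + g_fake)
```

Because the generator is penalized relative to real images rather than toward an absolute target, this formulation tends to preserve high-frequency texture instead of producing the overly smooth results typical of a single pixel-wise loss.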
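Finally, a minimal sketch of multi-frame attention fusion: each neighboring frame's features are weighted by their per-pixel similarity to the current frame before being merged. This is an illustrative module only, assuming PyTorch; the class name, channel count, and frame-window size are assumptions and do not reproduce the paper's gradual merging design.

```python
import torch
import torch.nn as nn

class TemporalAttentionFusion(nn.Module):
    """Weight each frame's features by similarity to the center (reference) frame."""

    def __init__(self, channels=64, num_frames=5):
        super().__init__()
        self.embed_ref = nn.Conv2d(channels, channels, 3, padding=1)
        self.embed_nbr = nn.Conv2d(channels, channels, 3, padding=1)
        self.fuse = nn.Conv2d(channels * num_frames, channels, 1)

    def forward(self, feats):                       # feats: (B, T, C, H, W)
        b, t, c, h, w = feats.shape
        ref = self.embed_ref(feats[:, t // 2])      # embedding of the center frame
        weighted = []
        for i in range(t):
            nbr = self.embed_nbr(feats[:, i])
            # Per-pixel similarity -> attention weight in (0, 1).
            attn = torch.sigmoid((ref * nbr).sum(dim=1, keepdim=True))
            weighted.append(feats[:, i] * attn)
        # Concatenate the attention-weighted frames and fuse them into one map.
        return self.fuse(torch.cat(weighted, dim=1))   # (B, C, H, W)
```

Weighting frames this way lets well-aligned, informative neighbors contribute more to the fused feature map, which is the intuition behind reducing the difficulty of feature fusion across frames.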