Rain streaks frequently appear in images captured in rainy weather, obscuring object information. This can degrade the performance of downstream computer vision systems or yield undesirable photos. Removing rain streaks from a single image is challenging because the problem is highly ill-posed and admits multiple solutions. Although deep neural networks trained on synthetic datasets have achieved good results in some scenarios, their limited generalization ability remains a problem. In this paper, we identify one cause of this limitation and propose a new network with stronger generalization performance. Surveying related work, we find that almost all existing methods predict a residual layer of rain streaks and obtain the clean background by subtracting this residual from the input image. Because most values in the rain-streak residual layer are close to zero, this mapping is easier for the network to fit according to residual learning theory. However, we observe that a small fraction of pixels have high residual values compared with those near zero, which may cause the network to overfit the high-residual rain streak patterns in the synthetic datasets and fail to remove new high-residual patterns in real scenes. Although such rain streaks are few in number, they usually cause significant structural damage due to their high contrast with the background. We find that when the background is predicted directly, the mapping for these high-residual rain streaks becomes easier to fit, because their underlying background values are usually close to zero, which is more conducive to structure recovery. Moreover, background estimation enables the network to learn semantic feature maps related to the objects in the image, aiding information reconstruction in severely damaged areas. Experiments show that background estimation generalizes markedly better than residual estimation. However, images recovered by background estimation alone often lose details or appear blurred in places. Inspired by a traditional method in which background estimation and residual rain-streak estimation are optimized jointly, we propose a coupled residual rain-streak and background estimation network. Guided by the residual rain streaks, the network reduces the mistaken removal of background details that resemble rain streaks. To make the network pay more attention to regions that are badly damaged or that contribute most to recovery, we propose a separable element-wise attention mechanism composed of a channel attention branch and a spatial attention branch. With little extra computation and few parameters, it is injected into every convolutional block. Extensive experiments demonstrate that the proposed method outperforms existing approaches on synthetic rain datasets and in real-world scenarios.
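The separable element-wise attention can be pictured as a channel branch and a spatial branch whose outputs are combined into one weight per feature-map element. The following is a minimal NumPy sketch of that idea, not the paper's implementation: the branch layers (a bottleneck MLP for channels, a channel-pooled map with a learned scalar for space) and all weight shapes are assumptions for illustration. The separability means only C + H*W attention values are computed instead of a full C*H*W tensor.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def separable_elementwise_attention(x, w1, w2, w_sp):
    """Hypothetical separable element-wise attention on a (C, H, W) feature map.

    Channel branch: global average pooling over (H, W), then a tiny
    bottleneck MLP, giving one weight per channel.
    Spatial branch: mean over channels scaled by a learned scalar w_sp,
    giving one weight per spatial position.
    Their outer combination yields one attention weight per element.
    """
    ch = x.mean(axis=(1, 2))                   # channel descriptor, shape (C,)
    ch = sigmoid(w2 @ np.maximum(w1 @ ch, 0))  # bottleneck MLP + ReLU, shape (C,)
    sp = sigmoid(w_sp * x.mean(axis=0))        # spatial map, shape (H, W)
    attn = ch[:, None, None] * sp[None, :, :]  # element-wise weights, (C, H, W)
    return x * attn

# Usage on a random feature map with an assumed reduction ratio of 2.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
w1 = rng.standard_normal((2, 4))   # squeeze 4 channels -> 2
w2 = rng.standard_normal((4, 2))   # expand back to 4
y = separable_elementwise_attention(x, w1, w2, 1.0)
```

Because both branches end in a sigmoid, every attention weight lies in (0, 1), so the module can only rescale features, never amplify them, and it preserves the input shape, which is what allows it to be dropped into every convolutional block.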