Surveillance systems are an important means of security monitoring, smart city management, intelligent transportation, and so on. Existing surveillance systems are able to recognize and track targets. In the real world, however, bad weather such as haze, rain, and snow often causes serious degradation of outdoor images, which leads to the failure of feature detection and the invalidation of feature-detection-based algorithms. Thus, multi-class weather classification and image haze and rain removal are fundamental problems for an all-weather intelligent surveillance system. In this thesis, we focus on how to classify multi-class weather and further remove the degradation caused by bad weather in images. The main contributions are as follows.

For multi-class weather classification in a single image, we propose a multi-feature fusion based method. Most traditional methods focus on two-class weather classification (sunny-rainy or sunny-cloudy) in a fixed scene, so they are unable to deal with other scenes and multi-class weather. Thus, we design a variety of discriminative features for each kind of weather, such as sky, shadow, dark channel pyramid, HOG-based rain streak, and snowflake noise features. Then, we utilize multi-kernel learning to choose the best combination of these features. Moreover, we build the first image dataset in this field as a benchmark for related work. Experimental results demonstrate that the proposed method can significantly improve the classification accuracy.

For single image haze removal, we propose an adaptive contrast enhancement method. Because conventional methods are usually based on assumptions or priors, they tend to fail on images that do not satisfy those assumptions or priors. Furthermore, these methods rarely consider the concentration of haze, so images with light haze tend to be over-enhanced and images with dense haze tend to be under-enhanced. To solve these problems, we construct a mapping from image optical features to air quality. By estimating the air quality, we can compute the concentration of haze in an image. We then propose a contrast loss and an information loss. By constraining these two loss functions with the haze concentration, our method can adaptively remove haze in different images. The experimental results show that the proposed method can significantly improve haze removal performance.
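To make the adaptive trade-off concrete, the following is a minimal sketch, not the thesis' actual formulation, of how an estimated haze concentration in [0, 1] could weight a contrast term against an information-loss term. The definitions of contrast_loss and information_loss, the weighting scheme, and the parameter alpha are illustrative assumptions introduced here.

```python
import numpy as np

def contrast_loss(enhanced):
    # Negative intensity variance: minimizing it pushes the result toward
    # higher contrast. The exact definition is an assumption.
    gray = enhanced.mean(axis=2)
    return -gray.var()

def information_loss(enhanced):
    # Fraction of pixels clipped at the ends of [0, 1], a proxy for detail
    # lost by over-stretching; illustrative, not the thesis' formula.
    return ((enhanced <= 0.0) | (enhanced >= 1.0)).mean()

def adaptive_objective(enhanced, haze_concentration, alpha=1.0):
    # Trade the two terms off with the estimated haze concentration:
    # dense haze tolerates stronger enhancement, while light haze puts more
    # weight on avoiding clipping. The weighting is an assumption.
    c = float(np.clip(haze_concentration, 0.0, 1.0))
    return c * contrast_loss(enhanced) + alpha * (1.0 - c) * information_loss(enhanced)
```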
For single image rain removal, we propose a dark channel guided convolutional neural network based method. Existing deraining approaches filter out the rain streaks but ignore the haze caused by the highly humid air in rain images. Nevertheless, we contend that haze removal and rain removal in rain images are tightly intertwined, since haze can reduce the visibility of the scene as dramatically as rain streaks. To leverage the coupled nature of these two tasks, we propose a rain-haze blending model and then design a novel dark channel guided convolutional neural network (DC-Net) to simultaneously remove the concomitant rain streaks and haze in rain images. We achieve this goal by adopting a multi-task deep neural network to learn a mapping from the rain image to the clear image. The proposed DC-Net jointly optimizes two kinds of loss. One is the color loss, which sums the RGB channel differences between the rain and original images and is mainly utilized to remove the rain streaks. The other is the dark loss, which computes the differences in the dark channel and better represents the harmful effect of haze in the rain image. With the proposed method, we achieve the best rain removal performance with the least time cost compared against the state-of-the-art methods.
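The two loss terms can be made concrete with a small sketch. The following Python/NumPy code shows one way the dark channel, the color loss, and the dark loss described above could be computed; the patch size, the L1 norm, and the weight lam are assumptions introduced here, and this is not the actual DC-Net training code.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    # Per-pixel minimum over the RGB channels followed by a local minimum
    # filter. img: H x W x 3 float array in [0, 1]. The patch size is an
    # assumption (a common choice in dark-channel-prior work), not a value
    # taken from the thesis.
    return minimum_filter(img.min(axis=2), size=patch)

def color_loss(pred, clear):
    # Sum of RGB channel differences between the derained output and the
    # clear image; the L1 norm is an assumption.
    return np.abs(pred - clear).sum()

def dark_loss(pred, clear, patch=15):
    # Difference between the dark channels of the output and the clear image,
    # intended to capture the residual haze component.
    return np.abs(dark_channel(pred, patch) - dark_channel(clear, patch)).sum()

def joint_loss(pred, clear, lam=1.0):
    # Joint objective: color term for rain streaks, dark term for haze.
    # The weight lam is illustrative.
    return color_loss(pred, clear) + lam * dark_loss(pred, clear)
```

Because haze raises the minimum intensity across channels, penalizing dark-channel differences targets the haze component that a purely RGB-based loss would under-weight.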