The recent emergence of deep learning for characterizing complex patterns in remote sensing imagery reveals its high potential to address some classic challenges in this domain. Typical deep learning models require extremely large datasets with rich content to train a multilayer structure that captures the essential features of a remote sensing image. Compared with the benchmark datasets used in popular deep learning frameworks, however, the volumes of available remote sensing datasets are particularly limited, which has restricted deep learning methods from achieving their full performance gains. To address this fundamental problem, we propose three types of data-augmented deep learning methodologies for three remote sensing image analysis scenarios: remote sensing scene classification, oil spill segmentation, and typhoon cloud system prediction.

First, we introduce a methodology that not only enhances the volume and completeness of the training data for any remote sensing dataset, but also exploits the enhanced dataset to train a deep convolutional neural network that achieves state-of-the-art scene classification performance. Specifically, we enhance the original dataset by applying three operations (flip, translation, and rotation) to generate augmented data, and use the augmented dataset to train a more descriptive deep model. The proposed methodology is validated on three recently released remote sensing datasets and confirmed as an effective technique that substantially improves deep-learning-based remote sensing scene classification.

Second, we propose a novel typhoon cloud system prediction method based on generative adversarial networks (GANs). Specifically, we develop an adversarial prediction model consisting of a generator and a discriminator. The generator produces continuous future cloud images by learning the evolution trend of typhoon clouds from multiple consecutive historical cloud images, thereby completing the visual prediction of typhoon clouds. The discriminator, in turn, distinguishes the generated future cloud images from real ones. In addition, we adopt a multi-scale generator/discriminator and a gradient-based loss function to improve the quality of the generated cloud images. The effectiveness of the proposed method has been evaluated on real satellite cloud images.

Finally, we propose an automatic oil spill segmentation method based on adversarial f-divergence learning. We exploit the f-divergence to measure the disagreement between the distributions of ground-truth and generated oil spill segmentations. To render the optimization tractable, we minimize a tight lower bound of the f-divergence by adversarially training a regressor and a generator, each structured as a different form of deep neural network. The generator aims at producing accurate oil spill segmentations, while the regressor characterizes discriminative distributions with respect to true and generated segmentations. It is the interplay between the generator network and the regressor network that achieves a minimum of the maximal lower bound of the f-divergence. This adversarial strategy enhances the representational power of both the generator and the regressor and avoids requiring large amounts of labeled data to train the deep network parameters. In addition, the trained generator enables automatic oil spill detection without manual initialization. Benefiting from the comprehensiveness of the f-divergence for characterizing diversified distributions, our framework accurately segments variously shaped oil spills in noisy synthetic aperture radar (SAR) images.

Overall, in this thesis, we propose three types of data-augmented deep learning methodologies for effective remote sensing image analysis with limited training data.
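The flip, translation, and rotation operations used in the first methodology can be sketched as follows. This is a minimal illustrative example, not the thesis's actual pipeline; the function name, the shift offsets, and the use of NumPy are assumptions for the sketch.

```python
import numpy as np

def augment(image):
    """Generate augmented copies of a remote sensing image
    (an H x W or H x W x C array) via flips, translations,
    and 90-degree rotations."""
    augmented = []
    # Flips: horizontal (left-right) and vertical (up-down).
    augmented.append(np.flip(image, axis=1))
    augmented.append(np.flip(image, axis=0))
    # Translations: circular shifts of a few pixels (wrap-around);
    # a real pipeline might zero-pad or crop instead.
    for dy, dx in [(8, 0), (0, 8), (-8, 0), (0, -8)]:
        augmented.append(np.roll(image, shift=(dy, dx), axis=(0, 1)))
    # Rotations: 90, 180, and 270 degrees.
    for k in (1, 2, 3):
        augmented.append(np.rot90(image, k=k))
    return augmented
```

Applied to one image, this yields nine augmented copies (two flips, four shifts, three rotations), so a dataset of N scenes grows to 10N training samples including the originals.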
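The tight lower bound minimized in the third methodology is, in its standard variational form (as popularized by f-GAN-style training; the exact parameterization in the thesis may differ):

```latex
D_f(P \,\|\, Q) \;\ge\; \sup_{T} \Big( \mathbb{E}_{x \sim P}\big[T(x)\big]
  \;-\; \mathbb{E}_{x \sim Q}\big[f^{*}\!\big(T(x)\big)\big] \Big)
```

where \(f^{*}\) is the convex conjugate of the generator function \(f\), \(P\) is the distribution of ground-truth segmentations, and \(Q\) that of generated ones. The regressor network plays the role of \(T\), maximizing the bound, while the generator updates \(Q\) to minimize the resulting maximum.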