Remote sensing is an important technical means of collecting data about the Earth and its changes. Optical remote sensing images record the electromagnetic spectrum characteristics of land cover. At present, optical remote sensing image interpretation technology is widely used in many fields, such as national land resource management, precision agriculture, atmospheric environment monitoring, and natural disaster prevention. Optical remote sensing image interpretation can be roughly divided into two directions: detection of targets of interest and extraction of typical land cover. Of the two, the extraction of typical land cover has always been a difficult problem, and traditional segmentation methods are greatly limited in solving it. In recent years, with the continuous development of computer technology, deep learning has achieved excellent results in the field of computer vision, which provides a new perspective on extracting typical land cover from optical remote sensing images. Meanwhile, with the rapid development of remote sensing technology and the improvement of optical remote sensing image acquisition methods, the collected information has become increasingly detailed, bringing both opportunities and challenges to typical land cover extraction. In view of the characteristics of current optical remote sensing image data, this paper proposes a series of deep-learning-based algorithms for extracting typical land cover from optical remote sensing images, aiming to improve extraction accuracy. The main research contents are as follows:

(1) A method combining an image classification network with a semantic segmentation network. A semantic segmentation network is usually divided into a downsampling part and an upsampling part. Image classification networks fill the role of the downsampling part, and their more refined designs can extract richer features. In this method, MixNet, a lightweight multi-scale image classification network, is used for downsampling to obtain multi-scale feature information; the upsampling part of U-Net, a classic semantic segmentation network, is then used for image restoration. The lightweight network reduces the amount of computation, improves accuracy while maintaining training speed, and leaves room for the addition of subsequent modules.

(2) A multi-flow network method for land cover extraction based on dual feature extraction and fusion. This method uses dual feature extraction and fusion to construct a network with multiple information flow paths from the input end to the final output end, so that the network can capture more complex features and thereby achieve higher accuracy. At the same time, a loss function combining cross-entropy loss and Dice loss is used to integrate their respective characteristics and advantages, measuring the distance between the predicted results and the ground truth more accurately and thus improving accuracy.

(3) A land cover extraction method based on a multi-directional network combined with the BoT module. In the extraction of typical land cover, reasonable screening of feature information is very important. An attention mechanism can effectively capture the multi-directional positional information worthy of attention in an image, providing effective feature information for the extraction of typical land cover and thus improving the final accuracy. The BoT block is a powerful attention mechanism module; using BoT blocks to replace some modules in the network reasonably filters the acquired feature information and yields image features containing multi-directional information, thereby improving the final extraction accuracy.

This work was supported in part by the Key Scientific Technological Innovation Research Project of the Ministry of Education; the National Natural Science Foundation of China under Grants 61671350, 61771379, and 61836009; the Foundation for Innovative Research Groups of the National Natural Science Foundation of China under Grant 61621005; and the Key Research and Development Program of Shaanxi Province of China under Grant 2019ZDLGY03-05.
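The combined cross-entropy and Dice loss used in the second method can be sketched as follows. This is a minimal illustrative NumPy version for binary segmentation; the abstract does not specify how the two terms are weighted, so the equal 50/50 weighting (`w=0.5`) here is an assumption:

```python
import numpy as np

def cross_entropy_loss(pred, target, eps=1e-7):
    # Binary cross-entropy: pred holds predicted foreground
    # probabilities, target is the binary ground-truth mask.
    pred = np.clip(pred, eps, 1 - eps)  # avoid log(0)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def dice_loss(pred, target, eps=1e-7):
    # Dice loss = 1 - Dice coefficient; measures region overlap,
    # which is robust to foreground/background class imbalance.
    inter = np.sum(pred * target)
    return 1 - (2 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def combined_loss(pred, target, w=0.5):
    # Weighted sum of the two terms; w is an illustrative assumption,
    # not a value taken from the thesis.
    return w * cross_entropy_loss(pred, target) + (1 - w) * dice_loss(pred, target)
```

A perfect prediction drives both terms toward zero, while cross-entropy penalizes per-pixel confidence errors and Dice penalizes poor region overlap, which is why such combinations are popular for imbalanced land cover masks.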