
Research On Remote Sensing Image Matching Based On Visual Local Feature

Posted on: 2017-09-16    Degree: Master    Type: Thesis
Country: China    Candidate: B B Wu    Full Text: PDF
GTID: 2382330569998782    Subject: Electronic Science and Technology
Abstract/Summary:
Remote sensing image matching is one of the important research topics in remote sensing information processing and its applications. Because real images often exhibit complexity and diversity, image matching remains a challenging task. Matching methods based on local visual features are widely used because they are invariant to image distortion and illumination change and adapt well to local occlusion caused by clouds. Within the technical framework of remote sensing image matching, this thesis focuses on the extraction of local visual features and develops more adaptive and accurate matching methods. The main research contents and achievements are as follows:

(1) SAR images are vulnerable to speckle noise, which causes false features to be extracted, and the uneven distribution of features further lowers matching accuracy. To address these problems, this thesis proposes a uniformly distributed corner matching method for SAR images based on bilateral filtering. First, a window mask is constructed with a bilateral filter and corners are detected from the autocorrelation matrix, which suppresses the adverse effect of speckle noise. Then, the corner distribution strategy is improved so that corners are uniformly distributed over the image and across scale space. Experimental results show that the method improves both the repeatability of corner points and the uniformity of their distribution, and thereby improves the accuracy of SAR image matching.

(2) Deep features have been widely shown to outperform traditional hand-crafted features in computer vision. Motivated by the question of how to extract deep features from a pre-trained convolutional neural network for remote sensing image matching, a UAV image matching method based on GoogLeNet deep features is proposed. First, local feature regions are detected; deep feature descriptors are then extracted from these regions with GoogLeNet, where the optimal local feature region is selected by statistical analysis. Color information is further introduced to enhance the feature representation. Experimental results show that the method improves the accuracy of UAV image matching and outperforms traditional methods based on hand-crafted features.

(3) Pre-trained convolutional neural networks are not designed or trained for remote sensing image matching, so the deep features they produce can be further improved. Meanwhile, recent results show that dual-tower convolutional neural networks enable joint training of feature representation learning and distance metric learning, which strengthens the feature representation significantly, yet this idea has not been widely applied to remote sensing image matching. In view of this, the thesis designs a Siamese dual-tower convolutional neural network and trains the model specifically on remote sensing images; on this basis, an optical satellite image matching method based on Siamese deep features is proposed. Following the Siamese design, the two towers share weights to ensure consistent feature extraction, and a contrastive loss is minimized so that the extracted features remain strongly discriminative even at low dimensionality. Experimental results show that the proposed method achieves high accuracy in optical satellite image matching, which demonstrates its effectiveness.
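The three methods summarized above can be outlined with short Python sketches. These are not the thesis's implementations; every architecture choice, parameter, and helper name below is an assumption made for illustration only.

A minimal sketch of speckle-robust, uniformly distributed corner detection, assuming bilateral filtering stands in for the bilateral-filter window mask, the Harris response stands in for the autocorrelation-matrix corner measure, and a simple grid-based selection approximates the improved distribution strategy:

```python
import cv2
import numpy as np

def uniform_corners(img_gray, grid=(8, 8), per_cell=4, k=0.04):
    # Bilateral filtering suppresses speckle while preserving edges
    # (stand-in for the bilateral-filter window mask described in the thesis).
    smoothed = cv2.bilateralFilter(img_gray, d=9, sigmaColor=75, sigmaSpace=75)

    # Corner strength from the gradient autocorrelation matrix (Harris response).
    response = cv2.cornerHarris(np.float32(smoothed) / 255.0,
                                blockSize=3, ksize=3, k=k)

    # Keep the strongest responses per grid cell so corners cover the image
    # evenly (a simple proxy for the improved corner-distribution strategy).
    h, w = response.shape
    corners = []
    for gy in range(grid[0]):
        for gx in range(grid[1]):
            y0, y1 = gy * h // grid[0], (gy + 1) * h // grid[0]
            x0, x1 = gx * w // grid[1], (gx + 1) * w // grid[1]
            cell = response[y0:y1, x0:x1]
            idx = np.argsort(cell.ravel())[-per_cell:]
            ys, xs = np.unravel_index(idx, cell.shape)
            corners.extend((x0 + x, y0 + y) for x, y in zip(xs, ys))
    return corners
```

For the GoogLeNet-based descriptor, one common way to obtain deep features from a pre-trained network is to drop the classifier and use the pooled activation as the descriptor of each detected region; the layer and input size chosen here are assumptions, whereas the thesis selects the optimal feature region by statistical analysis and additionally exploits color information.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Pre-trained GoogLeNet with the classifier removed: the 1024-D pooled
# activation serves as the descriptor of each detected feature region.
googlenet = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
googlenet.fc = torch.nn.Identity()
googlenet.eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def describe(patches):
    """patches: list of HxWx3 uint8 arrays cropped around detected regions."""
    batch = torch.stack([preprocess(p) for p in patches])
    with torch.no_grad():
        return googlenet(batch)  # one 1024-D descriptor per patch
```

Finally, the Siamese dual-tower idea (weight sharing plus a contrastive loss) can be sketched as below; the small branch network is only a placeholder for the deeper tower trained on remote sensing images in the thesis.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseBranch(nn.Module):
    """Placeholder tower; both patches pass through this same branch,
    so the two towers share weights by construction."""
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(64 * 4 * 4, dim)

    def forward(self, x):
        x = self.features(x)
        return F.normalize(self.fc(x.flatten(1)), dim=1)

def contrastive_loss(f1, f2, label, margin=1.0):
    # label = 1 for matching patch pairs, 0 for non-matching pairs.
    d = F.pairwise_distance(f1, f2)
    return torch.mean(label * d.pow(2) + (1 - label) * F.relu(margin - d).pow(2))

branch = SiameseBranch()                 # single branch = shared weights
patch_a = torch.randn(8, 1, 64, 64)      # toy batch of image patches
patch_b = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(branch(patch_a), branch(patch_b), labels)
loss.backward()
```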
Keywords/Search Tags:Remote sensing image matching, Visual local feature, Feature detection, Feature distribution, Deep feature description