Stereo visual odometry is a fundamental task in computer vision and is increasingly needed in applications such as autonomous driving and augmented reality. Its accuracy relies heavily on the accuracy of local feature correspondence, which is typically obtained by extracting and matching hand-crafted local descriptors. Our method introduces a Generative Adversarial Network (GAN) to the task of visual odometry and generates deep feature descriptors and feature points simultaneously through semi-supervised learning, using semantic information for geometrically consistent matching. It is a semi-supervised, end-to-end semantic-geometry consistency model for visual odometry. Our contributions are threefold. First, we train an end-to-end feature point and deep descriptor generator based on semantic-geometry consistency. Second, we are the first to leverage generative adversarial training for visual odometry. Third, unlike previous fully supervised and unsupervised deep features, our framework is the first semi-supervised visual odometry that adds semantic information to geometric matching. Experiments show that the GAN reduces the impact of limited training data and achieves superior performance in feature fitting, and verify that adding semantic information to geometric consistency matching improves visual odometry performance. We quantitatively evaluate our method on the challenging KITTI odometry benchmark. Compared with state-of-the-art schemes, the proposed semantic-aware generative adversarial network visual odometry (sGAN-VO) achieves superior performance, reducing pose estimation error relative to ORB-SLAM2 and GDVO. The results show that sGAN-VO ranks second among stereo visual odometry methods.