Vehicle re-identification (Re-ID) is an emerging technology in the field of intelligent video analysis. It is mainly used for vehicle image recognition in public-safety surveillance video, and its core goal is to identify and match vehicles traveling across time and space within a surveillance network whose cameras have non-overlapping fields of view. This capability drives the development of public safety, safe cities, intelligent transportation, smart cities, and autonomous driving, and therefore has important research and application value. With the advance of deep learning, research in this area has made remarkable progress. However, due to the complexity of the data, scenes, and environments involved, vehicle re-identification is still far from solved. Focusing on these complex factors, this dissertation studies vehicle re-identification from the perspectives of knowledge embedding and spectrum fusion. Specifically, the research is organized into the following parts: camera-aware intra-instance relation embedding for robust vehicle re-identification; attribute-based enhancement and state-based weakening for vehicle re-identification; multi-scale knowledge-aware transformer for vehicle re-identification; and heterogeneous spectrum fusion for vehicle re-identification.

The blossoming of deep convolutional networks has driven the development of vehicle re-identification through classification-related loss functions (e.g., cross-entropy loss and triplet loss); a minimal sketch of these baseline losses is given after this overview. However, the global structure of vehicle instances may be harmed by the large appearance variations caused by different cameras and viewpoints. To handle this problem, we propose a Camera-Aware intra-instance Relation Embedding (CARE) framework for robust vehicle Re-ID. First, we use camera and viewpoint annotations to mine graph relationships among positive samples. Then, we design a novel segmented list loss that uses these graph relations to assign different learning weights to positive samples in different lists. Extensive experiments on benchmark datasets demonstrate the superior performance of the proposed CARE against state-of-the-art vehicle Re-ID methods.

Vehicle re-identification is a crucial task in smart cities and intelligent transportation, aiming to match vehicle images across non-overlapping surveillance cameras. However, images of different vehicles may show only small visual discrepancies when they share the same attributes, while images of the same vehicle may show large visual discrepancies under different states. To address both challenges, we propose an Attribute and State Guided Structural Embedding Network (ASSEN) that achieves discriminative feature learning through attribute-based enhancement and state-based weakening. Extensive experiments on benchmark datasets demonstrate the superior performance and generalization of the proposed ASSEN against state-of-the-art vehicle Re-ID methods.

Existing vehicle re-identification methods usually suffer from intra-instance discrepancy and inter-instance similarity. The key to solving this problem lies in filtering out identity-irrelevant interference and collecting identity-relevant vehicle details. Toward this end, we propose a novel Multi-scale Knowledge-Aware Transformer (MsKAT) that, guided by knowledge vectors, disentangles identity-relevant features from identity-irrelevant ones. Extensive evidence demonstrates that the proposed MsKAT achieves new state-of-the-art results on three widely used vehicle re-identification benchmarks.
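As background for the classification-related losses (cross-entropy and triplet) mentioned in the CARE part above, the sketch below shows, under stated assumptions, how this standard combined objective is commonly written in PyTorch. The class name ReIDLoss, the batch-hard mining strategy, and the hyperparameter values are illustrative assumptions only; the proposed segmented list loss is not reproduced here.

```python
# Minimal sketch, assuming a PK-sampled batch (P identities x K images) so every
# anchor has at least one positive and one negative; not the CARE segmented list loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReIDLoss(nn.Module):
    """Cross-entropy (identity) loss + batch-hard triplet loss."""

    def __init__(self, num_ids: int, feat_dim: int = 2048, margin: float = 0.3):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_ids)  # ID classification head
        self.ce = nn.CrossEntropyLoss()
        self.margin = margin

    def forward(self, feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Identity term: classify each embedding into its vehicle ID.
        ce_loss = self.ce(self.classifier(feats), labels)

        # Triplet term with batch-hard mining.
        dist = torch.cdist(feats, feats, p=2)                     # pairwise L2 distances
        same_id = labels.unsqueeze(0) == labels.unsqueeze(1)      # positive-pair mask
        hardest_pos = (dist * same_id.float()).max(dim=1).values  # farthest positive
        hardest_neg = (dist + same_id.float() * 1e6).min(dim=1).values  # closest negative
        tri_loss = F.relu(hardest_pos - hardest_neg + self.margin).mean()

        return ce_loss + tri_loss
```

In a typical pipeline, feats would be the backbone embeddings of one training batch, and this scalar loss would be back-propagated through both the classifier head and the backbone.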
Currently, most work focuses on RGB-based vehicle Re-ID, which limits its applicability to real-life scenarios in adverse environments such as darkness and bad weather. Infrared (IR) spectrum imaging offers complementary information that can relieve the illumination issue in computer vision tasks. Furthermore, vehicle Re-ID faces the major challenge of diverse appearance across views, for example with trucks. In this work, we contribute a multi-spectral vehicle Re-ID benchmark named RGBN300, which contains both multi-spectral and multi-view images. We further propose a Heterogeneity-collaboration Aware Multi-stream convolutional Network (HAMNet) to automatically fuse different spectrum features in an end-to-end learning framework. Comprehensive experiments on prevalent networks show that our HAMNet can effectively integrate multi-spectral data for robust vehicle Re-ID in both day and night.
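To make the multi-stream fusion idea concrete, the following is a minimal sketch of a generic two-stream network that fuses RGB and infrared features by concatenation. The backbone choice (ResNet-18), the fusion head, and the name TwoStreamFusionNet are assumptions for illustration; this is not the actual HAMNet architecture, which additionally models heterogeneity-collaboration across spectra.

```python
# Minimal sketch of two-stream spectrum fusion; placeholder design, not HAMNet itself.
import torch
import torch.nn as nn
from torchvision import models

class TwoStreamFusionNet(nn.Module):
    """One CNN stream per spectrum, fused by feature concatenation."""

    def __init__(self, num_ids: int, feat_dim: int = 512):
        super().__init__()
        # Independent backbones so each spectrum learns its own representation.
        # The IR input is assumed to be replicated to 3 channels to fit the ResNet stem.
        self.rgb_stream = models.resnet18(weights=None)
        self.ir_stream = models.resnet18(weights=None)
        backbone_dim = self.rgb_stream.fc.in_features  # 512 for ResNet-18
        self.rgb_stream.fc = nn.Identity()
        self.ir_stream.fc = nn.Identity()

        # Fuse the two spectrum features and project them to a joint embedding.
        self.fusion = nn.Sequential(
            nn.Linear(2 * backbone_dim, feat_dim),
            nn.BatchNorm1d(feat_dim),
            nn.ReLU(inplace=True),
        )
        self.classifier = nn.Linear(feat_dim, num_ids)

    def forward(self, rgb: torch.Tensor, ir: torch.Tensor):
        fused = torch.cat([self.rgb_stream(rgb), self.ir_stream(ir)], dim=1)
        embedding = self.fusion(fused)       # used for retrieval at test time
        logits = self.classifier(embedding)  # used for the ID loss during training
        return embedding, logits
```

At test time the fused embedding would be used for retrieval, while the logits only serve the training-time identity loss.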