
Research On Cross-Modality Person Re-Identification Algorithm Based On Deep Learning

Posted on: 2023-08-24
Degree: Master
Type: Thesis
Country: China
Candidate: Y W Pan
Full Text: PDF
GTID: 2558307097978819
Subject: Information and Communication Engineering
Abstract/Summary:
Cross-modality person re-identification is the task of matching person images captured by infrared cameras and visible-light cameras. It overcomes the limitations of traditional single-modality person re-identification (Re-ID) in night-time surveillance scenes and broadens the applicability of Re-ID technology. Because visible and thermal cameras sense different signals (reflected visible light versus emitted thermal radiation), there is a large cross-modality discrepancy between person images. In addition, factors such as illumination changes, free poses, varying viewpoints, and occlusions can cause images of the same person to differ greatly in appearance. All of these problems make cross-modality person re-identification extremely challenging.

This thesis first reviews and summarizes the development of existing cross-modality person re-identification algorithms to establish a clear picture of the research background, current state, and research trends of the field. To study deep-learning-based cross-modality person re-identification in depth, it then examines several classic algorithmic directions, discussing representative state-of-the-art algorithms and their principles, their shortcomings, and possible directions for improvement. This groundwork reveals three main problems with current algorithms: (1) they generally use a two-stream convolutional neural network, which complicates the network; (2) they fail to directly constrain the feature-level cross-modality discrepancy; (3) they ignore the mining of local person details. The goal of this thesis is therefore to design a lightweight feature extraction network that extracts person features with cross-modality consistency, discrimination, and robustness.

To address these problems, this thesis introduces two improved algorithms and frameworks: (1) a feature extraction framework based on a sample-center loss function. We adopt a lightweight two-stream feature extraction module, which effectively reduces network complexity, and design a new sample-center loss dedicated to eliminating feature-level cross-modality differences, ensuring the intra-class similarity and inter-class discrimination of person features, and helping the framework learn modality-consistent and discriminative person features. (2) A BDB-based dual-embedding-branch feature extraction framework that strengthens the learning of local attention features and thereby extracts more robust person features. We evaluate our method on the standard cross-modality person re-identification datasets RegDB and SYSU-MM01, and conduct a series of ablation studies, comparative experiments, and visualization analyses. Extensive experimental results show that our method effectively improves cross-modality person re-identification performance.
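The sample-center idea in (1) can be sketched as follows. This is a hypothetical plain-Python illustration, not the thesis's actual formulation: the function name, the `margin` parameter, and the exact pull/push form are assumptions. The key point it illustrates is that each identity's center is computed over features from both modalities, so pulling every sample toward its own center directly shrinks the feature-level cross-modality gap while a margin term preserves inter-class discrimination.

```python
import math

def sample_center_loss(features, labels, margin=0.3):
    # Hypothetical sketch: per-identity centers are averaged over samples
    # from BOTH modalities; each feature is pulled toward its own center
    # and pushed at least `margin` farther from the nearest other center.
    ids = sorted(set(labels))
    centers = {}
    for i in ids:
        group = [f for f, y in zip(features, labels) if y == i]
        centers[i] = [sum(col) / len(group) for col in zip(*group)]

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    total = 0.0
    for f, y in zip(features, labels):
        own = dist(f, centers[y])                         # pull toward own center
        others = [dist(f, centers[i]) for i in ids if i != y]
        push = max(0.0, margin + own - min(others)) if others else 0.0
        total += own + push                               # pull term + margin push
    return total / len(features)
```

In practice such a loss would be applied to the embeddings produced by the two-stream branches (one per modality) during training, typically alongside an identity classification loss.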
Keywords/Search Tags:Deep Learning, Person Re-identification, Metric Learning, Cross-modality Retrieval