
Fusion Representation And Recognition Of Finger Based On Graph Structure Features

Posted on: 2024-07-17
Degree: Master
Type: Thesis
Country: China
Candidate: Y H Wang
Full Text: PDF
GTID: 2558307178982499
Subject: Software engineering
Abstract/Summary:
Fingerprints have grown in popularity as a biometric trait in research and applications due to their excellent portability. However, as a unimodal biometric cue, fingerprints are frequently disturbed by acquisition devices and the environment. Multimodal biometric fusion recognition has emerged to address this problem: it improves the distinctiveness of the representation by combining multi-source information, thereby improving fault tolerance. However, differences in dimension size, texture posture, and feature distribution still hinder the fusion of the three finger modalities: fingerprints, finger veins, and finger-knuckle-prints. These issues severely restrict the performance of trimodal finger fusion.

In this research, we explore a trimodal finger fusion recognition framework for fingerprints, finger veins, and finger-knuckle-prints based on a heterogeneous graph structure. We analyze the current crystal-like alignment graph fusion strategies applied to the three finger modalities and point out their limitations. On the edge side, alignment graph fusion only permits each node to connect to a single counterpart in another modality, which leads to passive and incomprehensive connections. On the node side, the nodes of the three modalities are stacked as a whole, which ignores their individual characteristics and yields an isomorphic fused representation. We therefore explicitly model the properties and interactions of trimodal finger features, design a trimodal finger heterogeneous graph fusion, and improve fusion representation and recognition performance. The main contributions of this study are as follows:

Firstly, we propose bonding fusion for the three finger modalities, inspired by chemical atomic bonds. By pre-clustering and classifying features of the three finger modalities, we construct a shared label set for all graph nodes, and thereby establish global connections that describe the correlation among different finger modalities. In addition, Delaunay triangulation, which encodes positional connectivity, is used for intra-modal edges to differentiate the two types of node connections. This design incorporates both cross-modal interactions and the inherent characteristics of the three finger modalities, enriching the fused representation. Experiments on an existing trimodal finger database confirm the research potential of heterogeneous modeling.

Secondly, we propose an end-to-end heterogeneous graph fusion neural network, Deep Attention Graph Fusion Recognition (DAGFR), for the three finger modalities. To address the limitations of bonding fusion, we design two plug-and-play modules: a Hierarchical Convolution-based Node Encoder (HCNE) and Two-stage Trimodal Joint Attention (TTJA). HCNE extracts initial image features and constructs graph node representations, promoting intra-modal information cohesion and hierarchical perception. TTJA updates the data distributions of the diverse modalities and improves the fused representation's intra-modal contextual perception and inter-modal dynamic interaction. Finally, we evaluate this learnable model on different trimodal finger databases. The multi-class closed-set recognition results confirm the feasibility and advantages of our approach.
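The two edge types described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the thesis's implementation: `intra_modal_edges` derives undirected intra-modal edges from a Delaunay triangulation of 2-D node positions (via `scipy.spatial.Delaunay`), and `bonding_edges` links nodes across two modalities whenever they share a pre-clustering label. The function names and the toy coordinates/labels are hypothetical.

```python
# Hypothetical sketch: intra-modal edges from Delaunay triangulation,
# cross-modal "bonding" edges from shared pre-clustering labels.
import numpy as np
from scipy.spatial import Delaunay

def intra_modal_edges(points):
    """Undirected edges from the Delaunay triangulation of 2-D node positions."""
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:            # each simplex is a triangle (3 vertex indices)
        for i in range(3):
            a = int(simplex[i])
            b = int(simplex[(i + 1) % 3])
            edges.add((min(a, b), max(a, b)))  # store each edge once, ordered
    return sorted(edges)

def bonding_edges(labels_a, labels_b):
    """Cross-modal edges between nodes of two modalities sharing a cluster label."""
    return [(i, j)
            for i, la in enumerate(labels_a)
            for j, lb in enumerate(labels_b)
            if la == lb]

# Toy example: a unit square plus its center point as one modality's node layout.
pts = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]])
print(intra_modal_edges(pts))                # 8 edges: square boundary + fan to center
print(bonding_edges([0, 1, 2], [1, 0, 2]))   # matches by shared cluster label
```

In the real pipeline the node positions would come from detected texture keypoints and the labels from the shared pre-clustering step, so the two edge sets jointly capture positional connectivity within a modality and semantic correlation across modalities.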
Keywords/Search Tags:Finger biometrics, Multimodal fusion, Graph representation learning, Attention mechanism