
Non-local Attention Mechanism And Multi-Supervised Feature Aggregation Block Fusion Network For Salient Object Detection

Posted on: 2021-03-15
Degree: Master
Type: Thesis
Country: China
Candidate: L D Zhou
Full Text: PDF
GTID: 2428330614953822
Subject: Software engineering
Abstract/Summary:
Salient object detection mimics the attention mechanism of biological vision: it filters out most of the unimportant background information in an image and thereby highlights the salient objects, so it often serves as a preprocessing step in computer vision. With the emergence of deep neural networks and fully convolutional neural networks, salient object detection results have improved greatly. This thesis proposes a new deep fully convolutional network structure, named the non-local attention mechanism and multi-supervised feature aggregation block fusion network, which fuses the rich features of the feature aggregation blocks at every layer. Besides the features of its own layer, each feature aggregation block also contains features from the other layers, so every block carries both the strong semantic information of the deep network and the detailed features of the shallow network. In the top-down fusion process, the residual information of each layer can be learned, as in ResNet. At the same time, a non-local attention mechanism is introduced to improve contextual relevance, and multiple auxiliary supervision signals are attached to intermediate steps, which makes the network easier to optimize and accelerates convergence.

The innovations and contributions of this thesis are as follows:

(1) A brand-new aggregation block is designed to contain both deep high-level semantic information and shallow detailed features: the feature map of each layer of the backbone network is expanded into an aggregation block that also contains the other layers, so high-level semantics and detailed features complement each other inside the block.

(2) Deep high-level semantic feature aggregation blocks are fused with shallow feature aggregation blocks from top to bottom, so that the output layer can accurately locate salient objects while the detailed features in the aggregation blocks refine the silhouette boundaries of the salient objects.

(3) In the top-down fusion process, the non-local attention mechanism and multiple auxiliary supervisors are connected to the intermediate steps of the network. The aggregation blocks provide richer features during fusion, and owing to the non-local attention mechanism and multi-supervision, the network updates its weights and converges faster, and the salient object detection results have higher resolution.
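As a rough illustration of the non-local attention mechanism described above, the following NumPy sketch implements the standard embedded-Gaussian formulation, in which every spatial position attends to all other positions and the result is added back through a residual connection. The projection matrices `w_theta`, `w_phi`, `w_g`, and `w_out` are hypothetical stand-ins for learned 1×1 convolutions; this is a minimal sketch of the general technique, not the thesis's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def non_local_block(x, w_theta, w_phi, w_g, w_out):
    """Embedded-Gaussian non-local attention over a C x H x W feature map.

    Each output position aggregates features from *all* positions,
    weighted by pairwise similarity, then a residual connection adds
    the original features back (so the block can be inserted anywhere).
    """
    c, h, w = x.shape
    flat = x.reshape(c, h * w)               # C  x N, N = H * W
    theta = w_theta @ flat                   # C' x N  (queries)
    phi = w_phi @ flat                       # C' x N  (keys)
    g = w_g @ flat                           # C' x N  (values)
    attn = softmax(theta.T @ phi, axis=-1)   # N  x N  pairwise weights
    y = g @ attn.T                           # C' x N  aggregated values
    return x + (w_out @ y).reshape(c, h, w)  # residual connection
```

Because the attention matrix is N x N over all spatial positions, each position's output depends on global context, which is what improves the contextual relevance mentioned in the abstract.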
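The feature aggregation block idea in contribution (1) — each level's block collecting features from every other level so that deep semantics and shallow detail coexist — can be sketched as follows. The nearest-neighbour resize and the per-level projection matrices `projs` are illustrative assumptions (a real network would use learned convolutions and bilinear interpolation); the sketch only shows the aggregation pattern.

```python
import numpy as np

def resize_nn(x, h, w):
    """Nearest-neighbour resize of a C x H x W map to C x h x w."""
    c, hh, ww = x.shape
    ri = np.arange(h) * hh // h
    ci = np.arange(w) * ww // w
    return x[:, ri][:, :, ci]

def aggregation_block(features, level, projs):
    """Build the aggregate block for one backbone level.

    Every level's feature map is resized to the chosen level's spatial
    resolution, projected to a common channel count, and summed, so the
    resulting block carries deep semantics and shallow detail at once.
    """
    _, h, w = features[level].shape
    out = None
    for f, p in zip(features, projs):
        z = p @ resize_nn(f, h, w).reshape(f.shape[0], h * w)
        out = z if out is None else out + z
    return out.reshape(-1, h, w)
```

Building one such block per backbone level yields the set of aggregation blocks that the top-down fusion path then combines.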
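The multi-supervision in contribution (3) — auxiliary supervision signals attached to intermediate predictions — usually amounts to summing a per-output loss over every side output, so gradients reach shallow layers directly and convergence accelerates. A minimal sketch with pixel-wise binary cross-entropy (the loss weights are an assumption, not the thesis's choice):

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Pixel-wise binary cross-entropy for a saliency map in (0, 1)."""
    p = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))

def multi_supervised_loss(side_outputs, target, weights=None):
    """Total loss = weighted sum of the loss at every intermediate
    (side) prediction, so each stage of the network is supervised."""
    weights = weights or [1.0] * len(side_outputs)
    return sum(w * bce(s, target) for s, w in zip(side_outputs, weights))
```

At inference time only the final output is used; the auxiliary branches exist purely to shorten the gradient path during training.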
Keywords/Search Tags: Salient object detection, Feature aggregation block, Non-local attention mechanism, Multiple auxiliary supervision