
Research On Image Recognition Method By Spiking Neural Network Based On Attention Mechanism

Posted on: 2024-04-04
Degree: Master
Type: Thesis
Country: China
Candidate: J F Qin
Full Text: PDF
GTID: 2568307067972919
Subject: Computer technology
Abstract/Summary:
In recent years, deep learning techniques have made unprecedented progress, driven by the booming development of big data technology and the dramatic increase in parallel computing power. Moreover, about 80% or more of the information humans receive is conveyed visually, so research on image recognition technology is of great practical importance. However, deep neural networks also have obvious drawbacks: they are extremely expensive to train and run, compared with the approximately 20 watts of power consumed by the human brain. Traditional deep neural networks were born in the context of the von Neumann architecture, where data storage and computation are separated, whereas biological neural networks in the human brain use spike signals as carriers and operate as integrated storage-and-computation units, which is the basis of their energy efficiency. Therefore, an in-depth study of brain-inspired spiking neural networks (SNNs) is of great importance for reducing the power consumption of large models and for further developing in-memory computing hardware.

On the other hand, the self-attention mechanism is one of the hot topics in deep learning. When humans observe the outside world, they habitually and selectively acquire information according to their needs and interests. Similarly, attention mechanisms allow a model to assign different weights to different parts of the input data, so that more computation is devoted to the parts that require more attention. In this way, attention helps the model extract key information and ignore irrelevant information when processing large amounts of data, avoiding the information overload caused by excessive computational overhead. In short, in today's information explosion, teaching large models to distinguish the importance of information will undoubtedly improve both their performance and their execution efficiency.

In summary, this paper provides an in-depth exploration of attention-based spiking neural network methods for image recognition, proposes a novel network combining spiking characteristics with the self-attention mechanism, and conducts experiments on two kinds of datasets, static images and neuromorphic images, to investigate the representational capability of this network. The specific research of this paper is as follows:

(1) A spike coding scheme with bio-inspired feature extraction and optimisation is proposed. It addresses the problem that conventional ViT models have no dedicated edge-filtering mechanism for feature extraction, but instead feed visual information into the model directly as patches. Inspired by how the biological retina extracts image information, a difference-of-Gaussians filter is used to simulate On-Off ganglion cells and pre-process the visual input, which effectively enhances the information fed into the model. In the area of spike coding, an intensity micro-perturbation interval coding is also proposed, inspired by time-dependent coding schemes: adding a small temporal noise perturbation to the spike times derived from pixel-intensity conversion improves the robustness of the model. Both techniques enhance model performance.

(2) The Spike Global Mutual Attention model (SGMA) is proposed. This new model addresses the high cost and power consumption of deep learning models and the problem of how to introduce the self-attention mechanism into spiking networks. The former provides an energy-efficient, event-driven computational paradigm for deep learning, while the latter captures dependencies between features. Because its computation is sparse and avoids multiplication, SGMA is efficient and has low computational energy consumption. It also solves several core problems, such as how to implement positional embedding in the spiking setting. By investigating the computational properties of neurons in spiking neural networks, different training algorithms are experimented with, and the models are further optimised in depth by modifying the underlying code of a popular spiking framework. Experiments show that SGMA achieves competitive classification performance on both neuromorphic and static image datasets.
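The pipeline in contribution (1) can be illustrated with a minimal NumPy sketch; this is an assumption of this summary, not code from the thesis, and the function names (`dog_kernel`, `conv2d_same`, `latency_encode`) and parameter values are illustrative. It builds a difference-of-Gaussians kernel approximating an On-center/Off-surround ganglion receptive field, applies it to an image, and then converts pixel intensity to spike latency (brighter pixels fire earlier) with a small Gaussian jitter on each spike time, as the micro-perturbation coding describes.

```python
import numpy as np

def dog_kernel(size=7, sigma_center=1.0, sigma_surround=2.0):
    """Difference-of-Gaussians kernel: an On-center/Off-surround
    approximation of a retinal ganglion cell receptive field."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    def gauss(sigma):
        return np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return gauss(sigma_center) - gauss(sigma_surround)

def conv2d_same(image, kernel):
    """Naive 'same' 2-D convolution with zero padding (for illustration)."""
    kh, kw = kernel.shape
    padded = np.pad(image, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def latency_encode(image, t_max=100.0, jitter_std=1.0, rng=None):
    """Intensity-to-latency spike coding: brighter pixels spike earlier.
    A small Gaussian jitter on each spike time models the intensity
    micro-perturbation described in contribution (1)."""
    rng = rng or np.random.default_rng()
    norm = np.clip(image, 0, 255) / 255.0
    times = (1.0 - norm) * t_max               # intensity 255 -> spike at t = 0
    times += rng.normal(0.0, jitter_std, times.shape)
    return np.clip(times, 0.0, t_max)
```

In use, the DoG-filtered image would be rescaled back to the [0, 255] range before `latency_encode`, and setting `jitter_std=0` recovers a purely deterministic time-to-first-spike code.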
Keywords/Search Tags:Image Recognition, Spiking Neural Network, Self-Attention Mechanism, Spiking Coding