Currently, artificial intelligence (AI) theories and technologies are developing rapidly, and industry-enabling applications are booming. However, the ensuing problem of energy consumption has been widely discussed, and low-power AI has become an important research direction. Spiking neural networks (SNNs), one of the cutting-edge directions in brain-inspired computing research, show promise in addressing this challenge. SNNs mimic the sparse, event-driven spiking communication patterns of biological neural systems and offer significant energy-efficiency advantages. Implemented on hardware such as neuromorphic chips, SNNs are expected to realize extremely low-power AI and promote the development of green, low-carbon technologies. Despite the inherent energy advantages of SNNs, their unique spike-coding mechanism and complex neurodynamic constraints pose challenges for deep learning. It remains unclear how spiking information in deep SNNs characterizes and represents complex, high-dimensional data. This intrinsic low interpretability limits the sample efficiency and generalization ability of SNNs, increases training cost, and prevents SNNs from fully exploiting their energy-efficiency advantage. As practical application scenarios grow more diverse, the low data utilization and insufficient generalization ability of deep SNNs have become core challenges that must be solved. To address these problems, this dissertation systematically investigates deep learning methods for SNNs from four aspects: intrinsic feature representation, sample-efficiency optimization, knowledge transfer, and cross-data-type learning. The details are as follows:

(1) This dissertation proposes a spiking feature space characterization method based on the Poisson distribution to address the limitations of SNN deep feature spaces. The traditional spiking variational autoencoder (VAE) model uses network layers to construct the deep
feature space, which not only makes it difficult to explicitly define the spatial distribution but also requires training many redundant parameters. Therefore, based on the analytical conclusion that spiking features are temporally robust to input-information reduction, this dissertation uses the Poisson distribution to explicitly construct the deep spiking feature space without additional training parameters. Meanwhile, a reparameterizable spiking-variable sampling method is proposed to give the model the ability to generate diverse samples. Experimental results show that the proposed method reduces the number of training parameters by 87.9%, improves the information-reduction effect by up to 38.7%, and provides stronger noise robustness and a richer ability to generate new samples.

(2) This dissertation proposes a bio-inspired active learning (BAL) method for deep learning in SNNs. The method selects samples with high training value, thereby improving sample-utilization efficiency. Although active learning selects the most informative samples, existing methods are designed for traditional artificial neural networks (ANNs) and do not fully apply to SNNs. To address this problem, this dissertation uses the internal states of spiking neurons to define two neuronal behavioral patterns based on firing rate and membrane potential. The generalized behavioral patterns of neurons on unlabeled samples are then compared with the empirical behavioral patterns on labeled samples. Finally, sample uncertainty is computed from the comparison results and used as the selection criterion. Experimental results show that, under the same sample budget, the average accuracy of BAL improves by up to 6.3% over the traditional methods, and accurate selection is maintained even when samples are noisy.

(3) This dissertation proposes a spiking neural network transfer learning (SNNTL) framework to address the heavy dependence of SNN training on labeled samples in a new data domain. Transfer learning assists
model training on the target domain with knowledge from labeled samples in similar data domains, but the binary, sparse spiking features make traditional transfer losses difficult to converge in SNNs. To address this problem, this dissertation uses the firing rate to calculate the transfer loss, improving its information density. A novel transfer loss based on spiking distribution correlation is then proposed to reduce training difficulty. Meanwhile, the back-propagation rules for the transfer loss and the classification loss are derived. Experimental results show that the accuracy of SNNTL on the target domain improves by up to 29.9%, and all layers of the SNNTL model are transferable.

(4) To address the high cost of deep SNN learning on novel event data, this dissertation proposes a spiking transfer learning method named R2ETL (RGB-to-Event Transfer Learning) to reduce the difficulty of model training. Event cameras record visual information as discrete events, offering low power consumption and high temporal resolution. However, event data lacks detailed visual information such as color and texture, which makes model training difficult and costly. To address this problem, this dissertation proposes to utilize knowledge from traditional static images to assist model training on event data. A coding alignment module without temporal constraints and a feature alignment module with spatiotemporal distribution constraints are designed in R2ETL to carry out knowledge transfer hierarchically. Experimental results show that R2ETL achieves a maximum accuracy improvement of 5.74% over existing state-of-the-art methods and is robust to changes in the event-camera environment.

In summary, this dissertation significantly improves the sample efficiency and generalization ability of SNNs through innovative deep learning methods: improving spiking feature representation, increasing the efficiency of sample
utilization, achieving effective transfer learning, and reducing the learning cost of event data. These studies lay a foundation for SNNs to further optimize their energy-efficiency ratio, expand application scenarios, and adapt to diverse data types. Future research will continue to deepen the theoretical models and algorithmic optimization of SNNs, and will explore techniques such as multimodal fusion, adaptive learning strategies, and hardware acceleration to expand their applicability to complex tasks and novel application areas.
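To make the Poisson-based spike coding in contribution (1) concrete, the sketch below samples a binary spike train whose spike count follows a Poisson process, using the standard Bernoulli-per-timestep approximation (a spike with probability rate·dt at each step). This is an illustrative, hypothetical implementation of the general idea only; the dissertation's exact sampling and reparameterization scheme may differ.

```python
import numpy as np

def poisson_spike_train(rate, n_steps, dt=1e-3, rng=None):
    """Sample a binary spike train approximating a Poisson process
    with intensity `rate` (Hz): at each timestep of length `dt`,
    emit a spike with probability rate*dt (valid for small rate*dt)."""
    rng = np.random.default_rng() if rng is None else rng
    p = np.clip(rate * dt, 0.0, 1.0)
    return (rng.random(n_steps) < p).astype(np.int8)

# Encode a (hypothetical) feature value as a firing rate, then verify
# that the empirical firing rate matches the target intensity.
rng = np.random.default_rng(0)
rate_hz = 200.0
spikes = poisson_spike_train(rate_hz, n_steps=10_000, dt=1e-3, rng=rng)
empirical_rate = spikes.mean() / 1e-3  # spikes per second, ~200 up to noise
```

Because the feature-space distribution is fixed analytically rather than learned by extra network layers, no additional parameters are trained for the sampling step, which is the source of the parameter reduction described above.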
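The firing-rate behavioral comparison in contribution (2) can also be sketched. The toy function below scores an unlabeled sample by comparing its output-layer firing-rate pattern with the mean pattern of each labeled class and returning the entropy of the resulting similarity distribution (high entropy = matches no single class well = high training value). The distance-based similarity and entropy criterion are illustrative assumptions, not the dissertation's exact uncertainty measure.

```python
import numpy as np

def firing_rate_uncertainty(sample_rates, class_mean_rates):
    """Illustrative uncertainty score for active-learning selection.

    sample_rates     : firing-rate pattern of one unlabeled sample, shape (n,)
    class_mean_rates : empirical mean pattern per labeled class, shape (c, n)

    Similarity to each class decays with Euclidean distance; the entropy
    of the normalized similarities is returned as the uncertainty."""
    d = np.linalg.norm(class_mean_rates - sample_rates, axis=1)
    sim = np.exp(-d)
    p = sim / sim.sum()
    return float(-(p * np.log(p + 1e-12)).sum())

# Hypothetical empirical patterns: 2 classes, 3 output neurons.
class_means = np.array([[0.9, 0.1, 0.1],
                        [0.1, 0.9, 0.1]])
clear = np.array([0.88, 0.12, 0.10])   # close to class 0 -> low uncertainty
ambiguous = np.array([0.50, 0.50, 0.10])  # between classes -> high uncertainty
u_clear = firing_rate_uncertainty(clear, class_means)
u_ambig = firing_rate_uncertainty(ambiguous, class_means)
```

In an active-learning loop, the unlabeled pool would be ranked by this score and the highest-uncertainty samples sent for labeling.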