
Interpretability In Computational Neuroscience Via Deep Representation Learning

Posted on: 2023-10-08    Degree: Master    Type: Thesis
Country: China    Candidate: Y C Huang    Full Text: PDF
GTID: 2558306830487064    Subject: Control Science and Engineering
Abstract/Summary:
Neural population activity data help us understand brain function. However, such data usually take the form of abstract spike trains that are hard to interpret. Explaining the latent features of observed neural data is therefore vital: it gives scientists insight into modeling, analyzing, and predicting neural population activity. Computational neuroscientists have found that large-scale neural activity usually lies on a low-dimensional manifold. Learning latent factors and representations that are informative about the neural data helps reveal the complex neural mechanisms underlying observed behavior and cognition. Latent variable models built on deep generative models have discovered informative low-dimensional structure with promising performance and efficiency. Yet, despite their expressiveness, they still tend to learn obscure latent variable distributions, which makes the resulting neural activity difficult to interpret and analyze. Identifiability and interpretability therefore remain open problems for the community. From the standpoint of representation learning, this thesis introduces improvements over previous work, focusing on interpretability. The main contributions are as follows:

1) We propose a simple yet effective improvement that extracts the informative signal from noisy neural data in a self-supervised manner. We redesign the neural network architecture and introduce an additional constraint that decomposes the latent space into a part relevant to the underlying neural patterns and a part that is irrelevant. The proposed scheme improves the fit to neural data and the quality of the latent representations: representations of different neural patterns become more separable, while those of the same pattern become more compact, so the model identifies different neural patterns better than the baseline.

2) We propose a hyperspherical latent space to model directional latent variables. Moreover, we introduce behavior decoding to improve the identifiability of the latent variable distributions without carefully designed constraints. More importantly, the proposed model learns a task structure that is more informative for interpreting observed neural spike data, and it helps discover hidden features behind unobserved neural activity patterns.

We validate the improved models on a public neurophysiology dataset. The experiments show that, compared with the baselines, our models provide better interpretability for abstract neural spike activity.
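The sketch below is a minimal PyTorch illustration, not the thesis code, of the kind of architecture described in contribution 1): a latent variable model for spike counts whose latent space is split into a block that must decode behavior/pattern labels ("relevant") and a block that only serves reconstruction ("irrelevant"), with a small helper hinting at the hyperspherical latent space of contribution 2). All names (SplitLatentVAE, to_hypersphere), layer sizes, the Poisson likelihood, and the loss weights are illustrative assumptions, not details taken from the thesis.

```python
# Hypothetical sketch of a split-latent VAE for neural spike counts.
import torch
import torch.nn as nn

class SplitLatentVAE(nn.Module):
    def __init__(self, n_neurons: int, d_rel: int = 8, d_irr: int = 8, n_patterns: int = 4):
        super().__init__()
        d_lat = d_rel + d_irr
        self.d_rel = d_rel
        self.encoder = nn.Sequential(nn.Linear(n_neurons, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, d_lat)
        self.to_logvar = nn.Linear(128, d_lat)
        # Decoder outputs Poisson log-rates, one per neuron.
        self.decoder = nn.Sequential(nn.Linear(d_lat, 128), nn.ReLU(),
                                     nn.Linear(128, n_neurons))
        # Auxiliary behavior/pattern decoding head acts only on the "relevant" block.
        self.pattern_head = nn.Linear(d_rel, n_patterns)

    def forward(self, spikes):
        h = self.encoder(spikes)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        log_rate = self.decoder(z)
        pattern_logits = self.pattern_head(z[:, :self.d_rel])
        return log_rate, pattern_logits, mu, logvar

def to_hypersphere(z):
    # Project latent codes onto the unit hypersphere: a simple stand-in for the
    # directional (e.g. von Mises-Fisher) latent variables of contribution 2.
    return nn.functional.normalize(z, dim=-1)

def loss_fn(spikes, labels, log_rate, pattern_logits, mu, logvar, beta=1.0, gamma=1.0):
    # Poisson negative log-likelihood for the spike counts.
    recon = nn.functional.poisson_nll_loss(log_rate, spikes, log_input=True)
    # Standard Gaussian KL term.
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Behavior/pattern decoding loss that pushes task information into the relevant block.
    aux = nn.functional.cross_entropy(pattern_logits, labels)
    return recon + beta * kl + gamma * aux

if __name__ == "__main__":
    # Tiny smoke test on random data with hypothetical shapes.
    model = SplitLatentVAE(n_neurons=100)
    spikes = torch.poisson(torch.rand(32, 100) * 3.0)   # fake spike counts
    labels = torch.randint(0, 4, (32,))                 # fake pattern labels
    loss = loss_fn(spikes, labels, *model(spikes))
    print(float(loss))
```

The weight gamma on the auxiliary decoding term is what trades off reconstruction against separability of the relevant block; a full treatment of the hyperspherical case would replace the Gaussian posterior with a von Mises-Fisher one rather than simply normalizing.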
Keywords/Search Tags: neural population spikes, deep generative models, low-dimensional manifold, self-supervised learning, hyperspherical latent space