
Learning in Dynamic Synapse Neural Networks

Posted on: 2015-09-14
Degree: Ph.D
Type: Thesis
University: University of Southern California
Candidate: Yousefi, Ali
Full Text: PDF
GTID: 2478390020451711
Subject: Neurosciences
Abstract/Summary:
A frontier research area in neuroscience focuses on replacing damaged regions of the human hippocampus with neural prostheses. Neurophysiological and computational analysis of the hippocampal neural circuitry has generated significant research interest aimed at facilitating prosthesis development. System-level study of the hippocampus reveals a cascaded neural circuitry with direct and indirect synaptic pathways; meanwhile, electrophysiological recordings of its neural activity identify both long-term (LTP) and short-term (STP) forms of synaptic plasticity. In this research, synaptic adaptation, covering both LTP and STP, is formulated for an artificial model of the hippocampal neural circuitry consisting of multiple layers of neurons and thousands of synaptic connections. The novel neural model introduces a family of neuro-computational processing engines applicable to modeling hippocampal function and to a variety of spatio-temporal processing tasks.

This research starts by introducing an approximate linear state-space model that simulates the temporal dynamics of synaptic transmission. The proposed linear model simplifies functional analysis of the synapse's temporal dynamics while accurately replicating both the pre- and post-synaptic mechanisms regulating its vesicle-release dynamics. The dynamics of cortical neural circuitry are augmented with this linear state-space model, which substantially reduces the computational cost of large-scale dynamic synapse neural networks (DSNNs) and simplifies their learning process.

In the second part of this research, a supervised spike-in-spike-out learning rule for synaptic adaptation in DSNNs is developed; the proposed learning rule is biologically plausible and capable of simultaneously adjusting both the LTP and STP factors of each individual synapse in the network. The accuracy, repeatability, and scalability of the learning algorithm are established by rigorous simulations of single-layer DSNNs.
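The linear state-space formulation described above can be illustrated with a minimal sketch. The matrices, state interpretation, and parameter values below are purely hypothetical, chosen only to show the discrete-time form x[t+1] = A·x[t] + B·u[t], y[t] = C·x[t] driven by a presynaptic spike train; they are not the thesis's actual model.

```python
import numpy as np

def simulate_synapse(spikes, A, B, C, x0):
    """Run a discrete-time linear state-space synapse model over a spike train.

    The state x is an abstract stand-in for slow synaptic variables
    (e.g., vesicle availability, facilitation); u is 0/1 spike input.
    """
    x = np.asarray(x0, dtype=float)
    outputs = []
    for u in spikes:
        outputs.append(float(C @ x))  # synaptic response read-out at time t
        x = A @ x + B * u             # linear state update driven by the spike
    return np.array(outputs)

# Hypothetical parameters: diagonal A with entries < 1 gives exponential
# recovery between spikes; B injects a spike-driven increment.
A = np.array([[0.95, 0.0],
              [0.0, 0.80]])
B = np.array([0.05, 0.20])
C = np.array([1.0, 0.5])

spikes = np.array([0, 1, 0, 0, 1, 1, 0, 0])
y = simulate_synapse(spikes, A, B, C, x0=[0.0, 0.0])
```

Because the update is linear, the response to any spike train is a superposition of decaying impulse responses, which is what makes functional analysis and large-scale simulation cheap relative to nonlinear kinetic synapse models.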
In the last section of this research, synaptic adaptation is reformulated to address learning in multi-layer DSNNs. The topology of the hippocampal neural circuitry is replicated by a multi-layer DSNN, and the time divergence-convergence (TDC) concept is proposed to address synaptic adaptation in the deeper layers of the network. In the TDC learning scheme, synaptic adaptation in deeper layers is achieved by partitioning the network's desired task in time: each interneuron is trained to perform the desired task within one time slot, and output neurons reconstruct the full task by integrating the interneurons' contributions over time. The computational complexity of TDC learning grows linearly with processing time and network size, since synaptic adaptation in each neuron reduces to a supervised learning task. The TDC-based learning methodology is biologically plausible and addresses a large class of spatio-temporal processing tasks, including spike-domain functional mapping and classification. The new synaptic adaptation mechanism has been successfully applied to replicating neural recordings and to benchmark temporal processing tasks; moreover, the simulation results confirm that a two-layer DSNN model can serve as a universal time-series computation unit applicable to neural modeling and engineering tasks.
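The time-partitioning idea behind TDC can be sketched in a few lines. This is only an illustration of the decomposition-and-recombination structure, under the assumption of contiguous, equal-width time slots; the function names are invented here, and the actual interneuron training rule from the thesis is not reproduced.

```python
import numpy as np

def tdc_partition(target, n_slots):
    """Split a desired output sequence into per-slot sub-targets.

    Each hypothetical interneuron k is made responsible for the target
    only within its own time slot and is silent elsewhere.
    """
    T = len(target)
    edges = np.linspace(0, T, n_slots + 1).astype(int)  # slot boundaries
    sub_targets = np.zeros((n_slots, T))
    for k in range(n_slots):
        lo, hi = edges[k], edges[k + 1]
        sub_targets[k, lo:hi] = target[lo:hi]  # interneuron k's time slot
    return sub_targets

def tdc_recombine(sub_outputs):
    """Output unit: reconstruct the task by summing slot contributions."""
    return sub_outputs.sum(axis=0)

target = np.sin(np.linspace(0, np.pi, 12))   # an arbitrary desired trace
subs = tdc_partition(target, n_slots=3)       # one sub-target per interneuron
recon = tdc_recombine(subs)                   # output neuron's reconstruction
```

Because each interneuron only ever sees its own slot's sub-target, its adaptation is an independent supervised problem, which is why the overall cost scales linearly with the number of neurons and with processing time.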
Keywords/Search Tags: Neural, Synaptic adaptation, Model, Hippocampus, Network, Tasks, Time, Synapse