
On the complexity of computation and learning in neural networks

Posted on: 1992-06-11
Degree: Ph.D.
Type: Dissertation
University: University of Illinois at Chicago
Candidate: Gupta, Ajay
GTID: 1478390017950311
Subject: Computer Science
Abstract/Summary:
This dissertation studies the complexity of computation and learning in neural networks. One reason for studying neural networks is that they provide a computational model that seems closer to the framework the human brain employs for information processing than that of conventional computers. Neural networks are highly simplified models of the human brain, and one hopes that by attaining a proper understanding of the computational aspects of neural networks, one can gain significant insights into the computational characteristics of the brain.

In this dissertation, we consider three kinds of neural networks: Boltzmann machines, Hopfield nets, and feedforward neural nets. We study the computational and learning abilities of these networks from a complexity-theoretic point of view. We also relate the computational power of the various models, in particular Boltzmann machines and feedforward neural nets, the latter also known as threshold circuits in the complexity theory literature.
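As a concrete illustration of the correspondence the abstract draws between feedforward neural nets and threshold circuits, the following is a minimal sketch (not taken from the dissertation): a threshold gate outputs 1 exactly when the weighted sum of its Boolean inputs meets or exceeds its threshold, and a depth-2 circuit of such gates can compute XOR. All weights and thresholds below are hand-chosen illustrative assumptions.

```python
# Minimal sketch of a feedforward threshold circuit (illustrative only;
# the weights and thresholds are chosen by hand, not taken from the thesis).

def threshold_gate(inputs, weights, threshold):
    """A threshold gate fires (outputs 1) iff the weighted sum of its
    Boolean inputs meets or exceeds its threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

def xor_circuit(x1, x2):
    """Depth-2 threshold circuit computing XOR: the hidden gates compute
    OR and NAND, and the output gate takes their AND."""
    h_or = threshold_gate((x1, x2), (1, 1), 1)       # x1 OR x2
    h_nand = threshold_gate((x1, x2), (-1, -1), -1)  # NOT (x1 AND x2)
    return threshold_gate((h_or, h_nand), (1, 1), 2) # h_or AND h_nand

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", xor_circuit(a, b))  # prints the XOR truth table
```

Since a single threshold gate cannot compute XOR, the example also hints at why circuit depth matters in this model, one of the complexity-theoretic questions the dissertation addresses.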
Keywords/Search Tags: Neural, Complexity, Computation and learning, Boltzmann machines