This dissertation is on the complexity of computation and learning in neural networks. One reason for studying neural networks is that they provide a computational model that seems closer to the framework the human brain employs for information processing than that of conventional computers. Neural networks are highly simplified models of the human brain, and one hopes that by attaining a proper understanding of the computational aspects of neural networks, one can gain significant insights into the computational characteristics of the brain.

In this dissertation, we consider three different kinds of neural networks: Boltzmann Machines, Hopfield Nets, and Feedforward Neural Nets. We study the computational and learning abilities of these networks from a complexity-theoretic point of view. We also relate the computational power of the various models, in particular the Boltzmann Machines and the Feedforward Neural Nets, the latter also known as Threshold circuits in the complexity theory literature.