
Neural representations for generalization in reinforcement learning

Posted on: 2014-03-07
Degree: Ph.D
Type: Thesis
University: New York University
Candidate: Gustafson, Nicholas J
Full Text: PDF
GTID: 2458390008462381
Subject: Neurosciences
Abstract/Summary:
Reinforcement learning provides an elegant computational framework for describing how the brain learns, by trial and error, to choose actions that predict reward, a problem faced by all neural systems. For learning to be efficient, the brain must also generalize from prior experience to new situations. While much is known about how the brain represents value under simple conditions that require no generalization, little is known about how it solves problems in which generalization is essential for efficient learning.

The first project in this thesis investigates the extent to which neural spatial representations can support reinforcement learning. To enable learning, these representations should reflect the structure of value in the environment; if the goal is efficient generalization between locations, this constrains how they measure distance and how they handle obstacles. Our results demonstrate that, for efficient learning, neural basis functions should use geodesic distance, defined with reference to the underlying state (location) transition matrix, rather than Euclidean distance. The model we developed also generates experimentally testable hypotheses about the physiological properties of grid cells and place cells, and it reproduces previously reported physiological phenomena from the place cell and grid cell literature.

The second project uses a hybrid perceptual decision-making and three-armed bandit task to investigate the computational and neural mechanisms of generalization under state uncertainty, where a state refers to the environment or situation an animal is in. We first show qualitatively that gross choice behavior suggests reinforcement learning systems have access to state uncertainty, and we then develop a finer-grained behavioral model indicating that this uncertainty informs trial-by-trial choices. Our neuroimaging results show belief-state uncertainty activity in the ventral striatum that co-localizes with the reward prediction error at outcome, and chosen-value activations in the intraparietal sulcus that are best explained by a belief state that includes uncertainty. Overall, these results suggest that neural areas representing state uncertainty pass a correlate of state probabilities along to downstream cortical regions responsible for learning.
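The first project's key computational idea is that basis functions should respect geodesic distance, so that value does not generalize across obstacles such as walls. As an illustration only (the thesis's actual basis-function construction is not reproduced here), the following Python sketch computes geodesic distances by breadth-first search on a deterministic four-neighbor grid world and builds a radial basis function from them; the grid layout, the kernel width `sigma`, and all names are hypothetical.

```python
import numpy as np
from collections import deque

def geodesic_distances(passable, source):
    """BFS shortest-path (geodesic) distances on a 4-neighbor grid.

    `passable` is a boolean array, True where the agent can stand.
    Distances respect obstacles, unlike straight-line Euclidean distance.
    """
    rows, cols = passable.shape
    dist = np.full((rows, cols), np.inf)
    dist[source] = 0
    queue = deque([source])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and passable[nr, nc] and dist[nr, nc] == np.inf:
                dist[nr, nc] = dist[r, c] + 1
                queue.append((nr, nc))
    return dist

# Grid with a wall: points close in Euclidean terms can be far geodesically.
passable = np.ones((5, 5), dtype=bool)
passable[1:5, 2] = False            # vertical wall with a gap at the top row
d = geodesic_distances(passable, source=(4, 1))
euclidean = np.hypot(4 - 4, 3 - 1)  # straight-line distance to (4, 3)
print(d[4, 3], euclidean)           # geodesic 10.0 vs Euclidean 2.0

# A geodesic radial basis function centered on the source location:
sigma = 2.0                         # hypothetical kernel width
phi = np.exp(-d**2 / (2 * sigma**2))
```

In this toy layout, the two cells on opposite sides of the wall are two units apart in Euclidean terms but ten steps apart along any traversable path, so a geodesic basis function assigns them very different activations, which is the property the abstract argues supports efficient generalization.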
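For the second project, the abstract reports that learning appears to operate on a belief state, a probability distribution over possible states, rather than on a single inferred state. A minimal sketch of that general idea follows; it is not the thesis's behavioral model, and the learning rate, softmax temperature, and task dimensions are all hypothetical. Action values are computed as belief-weighted expectations, and the reward prediction error is credited to each state in proportion to its posterior probability.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 2, 3   # e.g., two ambiguous stimuli, three bandit arms
alpha = 0.1                  # learning rate (hypothetical value)
Q = np.zeros((n_states, n_actions))

def choose(belief, Q, beta=3.0):
    """Softmax choice over action values computed under the belief state."""
    q_b = belief @ Q                        # expected value of each action
    p = np.exp(beta * q_b - np.max(beta * q_b))
    p /= p.sum()
    return rng.choice(len(p), p=p)

def update(Q, belief, action, reward):
    """Credit reward to each state in proportion to its posterior probability."""
    delta = reward - belief @ Q[:, action]  # belief-weighted prediction error
    Q[:, action] += alpha * belief * delta
    return delta

# One trial: a noisy percept yields a belief over states, not a point estimate.
belief = np.array([0.7, 0.3])
action = choose(belief, Q)
delta = update(Q, belief, action, reward=1.0)
```

Under this scheme the prediction error `delta` itself depends on the belief state, which is consistent in spirit with the abstract's report of belief-state uncertainty co-localizing with reward prediction error signals in the ventral striatum.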
Keywords/Search Tags: Neural, Reinforcement, State uncertainty, Generalization, Representations