
AI Neuroscience: Visualizing and Understanding Deep Neural Networks

Posted on: 2018-09-23
Degree: Ph.D.
Type: Dissertation
University: University of Wyoming
Candidate: Nguyen, Anh M
Full Text: PDF
GTID: 1478390020957632
Subject: Computer Science
Abstract/Summary:
Deep Learning, a type of Artificial Intelligence, is transforming many industries, including transportation, health care, and mobile computing. The main actors behind deep learning are deep neural networks (DNNs). These artificial brains have demonstrated impressive performance on many challenging tasks, such as synthesizing and recognizing speech, driving cars, and even detecting cancer from medical scans. Given their excellent performance and widespread application in everyday life, it is important to understand (1) how DNNs function internally, (2) why they perform so well, and (3) when they fail. Answering these questions would allow end users (e.g., medical doctors harnessing deep learning to assist them in diagnosis) to gain deeper insight into how these models behave, and therefore more confidence in applying the technology to important real-world applications.

Artificial neural networks have traditionally been treated as black boxes: little was known about how they arrive at a decision when presented with an input. Similarly, in neuroscience, understanding how biological brains work has been a long-standing quest. Neuroscientists have discovered neurons in human brains that selectively fire in response to specific, abstract concepts such as Halle Berry or Bill Clinton, informing the discussion of whether learned neural codes are local or distributed. These neurons were identified by finding the preferred stimuli (here, images) that highly excite a specific neuron, which was accomplished by showing subjects many different images while recording a target neuron's activation.

Inspired by such neuroscience techniques, my Ph.D. study produced a series of visualization methods that synthesize the preferred stimuli for each neuron in a DNN to shed light on (1) the weaknesses of DNNs, which raise serious concerns about their widespread deployment in critical sectors of our economy and society, and (2) how DNNs function internally. Some of the notable findings are summarized as follows. First, DNNs are easily fooled: it is possible to produce images that are visually unrecognizable to humans but that state-of-the-art DNNs classify as familiar objects with near-certain confidence (e.g., labeling a white-noise image as "school bus"). These images can be optimized to fool the DNN regardless of whether we treat the network as a white box or a black box (i.e., whether or not we have access to the network's parameters). These results shed light on the inner workings of DNNs and call into question the security and reliability of deep learning applications. Second, our visualization methods reveal that DNNs can automatically learn a hierarchy of increasingly abstract features from the input space that are useful for solving a given task. In addition, we found that neurons in DNNs are often multifaceted: a single neuron fires for a variety of different input patterns (i.e., it is invariant to certain changes in the input). These observations align with the common wisdom previously established for both the human visual cortex and DNNs. Lastly, many machine learning hobbyists and scientists have successfully applied our methods to visualize their own DNNs or even to generate high-quality art images. We also turned the visualization frameworks into (1) an art-generation algorithm and (2) a state-of-the-art image generative model, contributing to the fields of evolutionary computation and generative modeling, respectively.
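To make the synthesis procedure above concrete, the sketch below performs activation maximization by gradient ascent: starting from noise, it repeatedly nudges the input image to increase a chosen unit's activation. This is a minimal illustration, not the dissertation's actual code; PyTorch, the ResNet-50 model, the class index, and all hyperparameters are assumptions, and the strongest visualizations in this line of work add natural-image priors and regularizers that are omitted here.

```python
# Minimal activation-maximization sketch (illustrative; the model, class
# index, and hyperparameters are placeholders, not the dissertation's setup).
import torch
import torchvision.models as models

# Any pretrained classifier can serve as the "subject" network.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

target_class = 779  # hypothetical target unit ("school bus" in the common
                    # ImageNet-1k ordering; verify the index for your model)
x = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([x], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    score = model(x)[0, target_class]  # the chosen unit's activation
    (-score).backward()                # minimize -score = ascend the score
    optimizer.step()

# Without natural-image priors, x now excites the unit strongly yet can
# remain unrecognizable to humans: exactly the fooling phenomenon above.
```

With a class-output unit as the target, the same loop produces the white-box fooling images described above; pointing it at a hidden unit instead yields that neuron's preferred stimulus.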
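The black-box case, where the network's parameters and gradients are unavailable, can be sketched as a simple evolutionary hill climb that only queries the model's output probabilities. Again, this is a hedged illustration: the function, starting image, and hyperparameters below are hypothetical, and the published fooling experiments used richer evolutionary encodings than raw pixel mutation.

```python
import torch

def fool_black_box(model, target_class, steps=2000, sigma=0.1):
    """Hypothetical (1+1)-style hill climb: mutate the current best image
    and keep the mutation whenever the model's confidence in target_class
    rises. Only forward passes are used; no gradients are required."""
    model.eval()
    best = torch.rand(1, 3, 224, 224)  # random starting image in [0, 1]
    with torch.no_grad():
        best_conf = torch.softmax(model(best), dim=1)[0, target_class]
        for _ in range(steps):
            cand = (best + sigma * torch.randn_like(best)).clamp(0, 1)
            conf = torch.softmax(model(cand), dim=1)[0, target_class]
            if conf > best_conf:  # greedy selection: keep improvements
                best, best_conf = cand, conf
    return best, best_conf
```

Given enough queries, such a loop can drive the reported confidence toward certainty even though the image stays meaningless to a human observer.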
Keywords/Search Tags: Deep learning, DNNs, Neural, Neuroscience