
Silicon primitives for machine learning

Posted on: 2004-02-03
Degree: Ph.D.
Type: Thesis
University: University of Washington
Candidate: Hsu, David
Full Text: PDF
GTID: 2468390011971417
Subject: Computer Science
Abstract/Summary:
I am a machine-learning researcher, and my programming language is VLSI. Machine-learning research concerns the development of algorithms and systems that extract useful information from data. I am interested in enabling learning in resource-limited embedded devices and systems, such as: (1) adaptive signal processing in wireless communication systems, (2) cellphones that can track a speaker's voice in noisy environments, (3) sensors that self-calibrate and filter redundant information from their inputs, (4) imagers that decompose images into high-level feature maps like shading and reflectance, (5) sensor networks that learn efficient low-bandwidth communication codes, and (6) micromechanical robots that learn to integrate sensory information with motor control. All of these applications can benefit from machine learning. Unfortunately, most learning algorithms today are developed for desktop computers, where preprocessed information is stored in large databases, the problem domain is static or slowly changing, and power and CPU cycles are plentiful. In contrast, learning systems for embedded applications must process information and learn in real time, on raw data from highly dynamic environments, under limited power and computational budgets.

This thesis describes a technology for statistical machine learning based on silicon device physics. I show that the equations that describe the flow of currents in a simple silicon device, the synapse transistor, can also describe machine-learning rules. The foundation of these silicon learning rules is a pair of synapse-transistor-based learning primitives: the spike-modulated memory cell and the automaximizing bump circuit. These two circuits are compact implementations of general principles common to a wide variety of learning rules, and they can form the basis of VLSI learning systems that are orders of magnitude smaller and more power-efficient than equivalent custom digital systems or microprocessors. Simulated results of learning rules based on the spike-modulated memory cell, and experimental results from a fabricated VLSI learning system based on the automaximizing bump circuit, show that these learning primitives achieve generalization performance comparable to that of their software counterparts. Consequently, these primitives are an enabling technology for applications of learning in systems where power and compute cycles are at a premium.
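The abstract describes the two primitives only in prose. As a rough software analogue, the sketch below models the automaximizing bump circuit as a bell-shaped similarity whose stored value adapts toward each input (so adaptation maximizes the unit's own response), and the spike-modulated memory cell as a generic spike-coincidence weight update. The Gaussian response shape, the rate constants, and the winner-take-all usage are all illustrative assumptions, not the circuits' actual device dynamics.

```python
import numpy as np

def bump_similarity(x, w, sigma=1.0):
    """Bell-shaped similarity between an input x and a stored value w.
    A software stand-in for the bump circuit's output; the Gaussian form
    and the width sigma are illustrative assumptions."""
    return np.exp(-((x - w) ** 2) / (2.0 * sigma ** 2))

class AutomaximizingUnit:
    """Toy model of an automaximizing bump element: nudging the stored
    value toward each input increases the unit's own similarity output,
    which is the sense in which the circuit 'automaximizes'."""

    def __init__(self, w0, lr=0.05, sigma=1.0):
        self.w = float(w0)   # analog stored value (charge on a floating gate in silicon)
        self.lr = lr         # hypothetical adaptation rate
        self.sigma = sigma

    def respond(self, x):
        return bump_similarity(x, self.w, self.sigma)

    def adapt(self, x):
        # Gate the update by the unit's own response -- a generic
        # soft-competitive rule, not the thesis's exact silicon dynamics.
        self.w += self.lr * self.respond(x) * (x - self.w)

def spike_modulated_update(w, pre_spike, post_spike, up=0.02, down=0.001):
    """Generic spike-coincidence update standing in for the spike-modulated
    memory cell; the rate constants here are made-up illustrations."""
    if pre_spike and post_spike:
        return w + up * (1.0 - w)   # bounded potentiation on coincidence
    return w * (1.0 - down)        # slow decay otherwise

# Example: three bump units competing to quantize a 1-D input stream.
units = [AutomaximizingUnit(w0) for w0 in (-1.0, 0.0, 1.0)]
for x in np.random.randn(500):
    winner = max(units, key=lambda u: u.respond(x))
    winner.adapt(x)                # only the best-matching unit adapts
print([round(u.w, 2) for u in units])
```

In the silicon versions, these updates are carried out by the device physics of the synapse transistor itself rather than by stored-program arithmetic, which is the source of the size and power advantage the abstract claims.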
Keywords/Search Tags: Primitives, Systems, Machine, Silicon, VLSI