
Autonomous evolution of sensory and actuator driver layers through environmental constraints

Posted on: 2003-04-18
Degree: Ph.D
Type: Dissertation
University: University of Florida
Candidate: Choi, TaeHoon Anthony
Full Text: PDF
GTID: 1468390011989685
Subject: Engineering
Abstract/Summary:
Although future applications for autonomous mobile robots (AMRs) are practically limitless, researchers must first address some of the hurdles blocking wide acceptance of AMRs as a viable solution. This research addresses some of these issues through the realization of Innate Learning (IL), Environmental Reinforcement Learning (ERL), and Autonomous Evolution of sensory and actuator Driver layers through Environmental Constraints (AEDEC). IL is a learning mechanism that exploits innate knowledge to improve and enhance learning. Using innate knowledge of its embodiment and its environment, IL provides a simple mechanism for autonomously detecting and correcting discrete production errors (i.e., errors in the wiring of sensors and actuators). ERL is a real-time learning architecture for refining primitive behaviors through interaction with highly structured environments. The ERL architecture allows self-calibration of sensors, actuators, and primitive behaviors by using a structured environment (i.e., an obstacle course) to provide real-time feedback on a robot's performance. Through this refinement process, lower-cost parts can be used and damaged parts can be replaced without affecting the rest of the system. Finally, the AEDEC learning architecture is the culmination of the preceding work, namely IL and ERL. By incorporating IL and ERL, AEDEC asserts that sensory and actuator driver layers can be autonomously programmed from a simple set of innate knowledge, guided by static constraints from a highly structured environment. In this way, AEDEC autonomously generates the driver layers (drivers) for sensory information and actuation controls, consequently reducing the workload of the human programmer. Since different types of robots (walking, two-wheeled, caterpillar-treaded, etc.) can be trained in the same environment, AEDEC permits code (high-level behavior) portability between different types of robots.
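The ERL self-calibration idea described above can be sketched as follows. This is a minimal illustration assuming a scalar-gain sensor model: the structured environment supplies known ground-truth distances (e.g., marked stations on an obstacle course), and the robot iteratively refines a scale factor until its scaled raw readings match those distances. The function name, the gradient-descent update, and the numbers are illustrative assumptions, not the dissertation's actual algorithm.

```python
# Hypothetical sketch of ERL-style self-calibration: the obstacle course
# provides known ground-truth distances, and the robot refines a per-sensor
# gain until raw readings, once scaled, agree with the environment.

def calibrate_sensor(raw_readings, true_distances, lr=0.05, iters=50):
    """Fit `scale` so that scale * raw ~= true, via gradient descent
    on the mean squared error between scaled readings and ground truth."""
    scale = 1.0  # start from the nominal (uncalibrated) gain
    n = len(raw_readings)
    for _ in range(iters):
        # Gradient of 0.5 * mean((scale*r - t)^2) with respect to scale.
        grad = sum((scale * r - t) * r
                   for r, t in zip(raw_readings, true_distances)) / n
        scale -= lr * grad
    return scale

# Example: a sensor that reads twice the true distance (true gain 0.5).
raw = [2.0, 4.0, 6.0]    # raw readings at three course markers
truth = [1.0, 2.0, 3.0]  # known marker distances from the course layout
gain = calibrate_sensor(raw, truth)
```

After calibration, `gain` converges near 0.5, so the miscalibrated sensor can be corrected in software. This is the sense in which lower-cost or replacement parts could be accommodated without retuning the rest of the system.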
Keywords/Search Tags: Actuator driver layers, Environment, Autonomous, AEDEC, Robots, ERL, Innate knowledge