
Automatic step-size adaptation in incremental supervised learning

Posted on: 2011-08-15
Degree: M.S
Type: Thesis
University: University of Alberta (Canada)
Candidate: Mahmood, Ashique
Full Text: PDF
GTID: 2448390002970005
Subject: Computer Science
Abstract/Summary:
The performance and stability of many iterative algorithms, such as stochastic gradient descent, depend largely on a fixed, scalar step-size parameter, and a single fixed value often yields limited performance across problems. We study several existing step-size adaptation algorithms on nonstationary, supervised learning problems using simulated and real-world data. We find that the existing algorithms are effective only when a meta-parameter is tuned separately for each problem. We introduce a new algorithm, Autostep, which combines several new techniques with an existing algorithm, and demonstrate that it effectively adapts a vector step-size parameter on all of our training and test problems without per-problem tuning of its meta-parameter. Autostep is the first step-size adaptation algorithm that can be applied to widely different problems with a single setting of all of its parameters.
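For context, the family of algorithms studied here maintains one step-size per weight and adapts it by a meta-gradient update. Below is a minimal Python sketch of IDBD (Sutton, 1992), a representative vector step-size adaptation algorithm of this kind; it is not the Autostep algorithm itself, and the variable names and the default meta step-size theta are illustrative assumptions, not values from the thesis.

    import numpy as np

    def idbd_update(w, h, beta, x, y, theta=0.01):
        """One IDBD update for linear regression on example (x, y).

        w     : weight vector
        h     : trace of recent weight updates (same shape as w)
        beta  : log step-sizes; the per-weight step-size is exp(beta_i)
        theta : meta step-size (the single meta-parameter that, per the
                thesis, existing methods require tuning per problem)
        """
        delta = y - np.dot(w, x)               # prediction error
        beta += theta * delta * x * h          # meta-gradient step on log step-sizes
        alpha = np.exp(beta)                   # per-weight step-sizes
        w += alpha * delta * x                 # LMS-style weight update
        # decay the update trace, clipping the decay factor at zero
        h = h * np.maximum(0.0, 1.0 - alpha * x * x) + alpha * delta * x
        return w, h, beta

Adapting the log of each step-size keeps every step-size positive and lets a single meta step-size move step-sizes multiplicatively across many orders of magnitude; Autostep adds normalization techniques on top of this pattern so that the meta-parameter no longer needs per-problem tuning.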
Keywords/Search Tags: Step-size adaptation, Supervised learning, Fixed scalar step-size, Meta-parameter tuning