
Preparing smart environments for life in the wild: Feature-space and multi-view heterogeneous transfer learning

Posted on: 2015-09-24
Degree: Ph.D.
Type: Dissertation
University: Washington State University
Candidate: Feuz, Kyle Dillon
Full Text: PDF
GTID: 1478390017498326
Subject: Computer Science
Abstract/Summary:
With the ever-increasing abundance of sensing and computing devices embedded in our environments, we have the opportunity to create personalized activity recognition ecosystems. Two key challenges must first be overcome: the new environment problem and the new sensing platform problem. The new environment problem is encountered every time a sensing platform is deployed to a new environment. The new sensing platform problem is encountered every time a new sensing platform is deployed into an environment with an existing sensing platform. We approach both as transfer learning problems with heterogeneous feature spaces, referred to as heterogeneous transfer learning, and propose several novel algorithms for each setting. We also present theoretical work on the accuracy bounds and run-time of these algorithms.

Feature-Space Remapping (FSR) is proposed as a novel class of heterogeneous transfer learning algorithms that can be applied to the new environment problem. These algorithms are the first to perform heterogeneous transfer learning without requiring explicit linkage data. We show that they outperform learning a model generalized across different environments using relations between features specified by a domain expert. We also show how FSR can be combined with ensemble learning to pool information from multiple datasets; this method outperforms the state of the art by 10% to 20%.

Multi-view transfer learning is proposed as a solution to the new sensing platform problem. In multi-view transfer learning, the same instance can be seen from multiple views or feature spaces, which facilitates transferring knowledge from one view to another. We develop several new multi-view learning algorithms for this problem. Using a well-trained view as a teacher, we show that the performance of new sensing platforms can be increased by as much as 20% through multi-view learning. The teacher can also be used to bootstrap a set of labeled training data for the new sensing platform, removing the need to manually annotate data when a new sensing platform is introduced. We also provide bounds on, and an estimate of, the learner's accuracy when ground-truth labels are not available to estimate it directly.
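To make the feature-space remapping idea concrete, the following is a minimal illustrative sketch in Python, not the dissertation's actual FSR algorithm: each feature of the new (target) environment is matched to the most similar feature of the existing (source) environment using only unlabeled per-feature statistics (here, mean and standard deviation), so that a classifier trained in the source feature space can be reused. All function names and the similarity heuristic are assumptions made for this example.

    # Sketch of feature-space remapping without explicit linkage data.
    # Assumption: mean/std profiles are a reasonable proxy for feature similarity.
    import numpy as np

    def learn_feature_mapping(source_X, target_X):
        """Map each target feature index to the nearest source feature index,
        comparing per-feature (mean, std) computed from unlabeled data."""
        src_stats = np.stack([source_X.mean(axis=0), source_X.std(axis=0)], axis=1)
        tgt_stats = np.stack([target_X.mean(axis=0), target_X.std(axis=0)], axis=1)
        mapping = {}
        for j, t in enumerate(tgt_stats):
            dists = np.linalg.norm(src_stats - t, axis=1)
            mapping[j] = int(np.argmin(dists))   # most similar source feature
        return mapping

    def remap(target_X, mapping, n_source_features):
        """Project target instances into the source feature space; if several
        target features map to the same source feature, the last one wins."""
        remapped = np.zeros((target_X.shape[0], n_source_features))
        for tgt_idx, src_idx in mapping.items():
            remapped[:, src_idx] = target_X[:, tgt_idx]
        return remapped

    # Usage: train any classifier on (source_X, source_y), then predict on
    # remap(target_X, learn_feature_mapping(source_X, target_X), source_X.shape[1]).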
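The teacher-based bootstrapping for the new sensing platform problem can likewise be sketched as follows; this is a simplified illustration under stated assumptions, not the dissertation's algorithm. A classifier trained on the mature platform's view pseudo-labels instances observed by both platforms, and the confident pseudo-labels train a model for the new platform's view, avoiding manual annotation. The logistic-regression choice, the confidence threshold, and all names are assumptions for the example.

    # Sketch of multi-view teacher/learner bootstrapping across sensing platforms.
    from sklearn.linear_model import LogisticRegression

    def bootstrap_new_view(teacher_X, teacher_y, paired_teacher_X, paired_new_X,
                           confidence=0.8):
        """Train a teacher on the old view, pseudo-label instances seen by both
        platforms, and fit a learner for the new view on confident labels only."""
        teacher = LogisticRegression(max_iter=1000).fit(teacher_X, teacher_y)
        proba = teacher.predict_proba(paired_teacher_X)
        pseudo_y = teacher.classes_[proba.argmax(axis=1)]   # teacher's labels
        keep = proba.max(axis=1) >= confidence              # confident ones only
        learner = LogisticRegression(max_iter=1000).fit(paired_new_X[keep],
                                                        pseudo_y[keep])
        return learner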
Keywords/Search Tags:Transfer learning, Environment, Sensing, Multi-view, Data