
Sparse Robust Recovery and Learning

Posted on: 2016-09-10    Degree: Ph.D    Type: Thesis
University: Harvard University    Candidate: Gwon, Youngjune Lee    Full Text: PDF
GTID: 2478390017976797    Subject: Computer Science
Abstract/Summary:
Sparse linear models offer two dual views of data, embodied in compressive sensing and sparse coding. Despite their mathematical equivalence, compressive sensing and sparse coding are two distinct classes of application for sparse linear models. Compressive sensing produces recoverable, low-dimensional compressed representations of data through a blind linear projection. Sparse coding discovers structural patterns underlying data by forcing a decomposition over a given dictionary of basis vectors. Sparsity is the common constraint: it makes exact recovery possible for compressive sensing, and it allows the forced decomposition to unveil meaningful features for sparse coding. In this thesis, I build on compressive sensing and sparse coding to address reconstructive and discriminative problems in sensing, wireless networking, and machine learning. Specifically, I aim to develop recovery and feature learning methods that are robust to complex data transformations and alterations. Using a wideband spectrum sensing application for cognitive radios, I empirically demonstrate the resilience of the proposed sparse recovery technique to linear and nonlinear distortions in a mix of heavily subsampled RF measurements. I push beyond the best-known efficiency for distributed compressive sensing and show that the spectrum sensing application can scale at constant communication cost. I also focus on learning sparse feature representations for discriminative machine learning tasks. I build a classification pipeline based on both single-layer and multilayer sparse coding, trained on multiple data modalities including text, images, and time series. To exploit possible higher-level feature constructs in data, I propose a deep architecture built on multilayer sparse coding, the Deep Sparse-coded Network (DSN). Trained with layer-by-layer dictionary learning followed by the proposed DSN backpropagation algorithm, DSN outperforms a deep stacked-autoencoder neural network on image and time-series classification. In addition, I present Nearest Neighbor Sparse Coding (NNSC), an enhancement of sparse coding that imposes a nearest-neighbor constraint in the sparse feature domain. Despite a higher reconstruction error, NNSC improves the classification performance of classical sparse coding.
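For concreteness, the dual view described above corresponds to two standard optimization problems; the notation here (measurement matrix Φ, dictionary D, sparsity weight λ) is conventional and not taken from the thesis itself. Compressive sensing recovers a sparse signal x from compressed measurements y = Φx by solving

    minimize ||x||_1    subject to    y = Φx,

while sparse coding, given data x and a dictionary D, performs the forced decomposition

    minimize over z    ||x − Dz||_2^2 + λ||z||_1,

whose minimizer z is the sparse code. In both cases the ℓ1 norm serves as the convex surrogate for sparsity that makes the problem tractable.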
Keywords/Search Tags: Sparse, Compressive sensing, Data, Recovery, Linear