
Smoothing Algorithm Of Censored Quantile Regression

Posted on: 2016-08-17
Degree: Master
Type: Thesis
Country: China
Candidate: L J Zhang
Full Text: PDF
GTID: 2180330467996723
Subject: Operational Research and Cybernetics
Abstract/Summary:
High-dimensional linear regression models have attracted much attention in areas such as information technology, biology, chemometrics, genomics, economics, finance, functional magnetic resonance imaging, and other scientific fields. The term "high-dimensional" refers to the situation where the number of unknown variables is larger than the number of samples in the underlying data. It is almost impossible to tackle such data without additional assumptions. One natural option is to exploit sparsity, which assumes that only a small number of unknown variables influence the response vector. The analysis of high-dimensional data poses many challenges for statisticians and calls for new methods and theories.

To estimate the regression coefficients well, we need a suitable regression model. Regular regression methods such as least-squares (l2) estimation focus on the mean of the dependent variable, whereas quantile regression models its conditional quantiles. Quantile regression performs well in high-dimensional sparse models, particularly when the noise is heavy-tailed or heteroscedastic. It has become a popular and important tool in statistical analysis, and it includes the well-known median regression, or least absolute deviation (LAD) regression, as a special case. Recently, regularized quantile regression has been widely studied, so we adopt it in this thesis.

Regularization, one popular way to analyze high-dimensional sparse data, has been used for a long time. Since sparse models are becoming increasingly important in statistics, machine learning, and signal processing, many researchers work on sparse estimation via l1 or l0 regularization. With regularization, regression models can achieve the oracle property.

Censored quantile regression is a powerful tool for survival analysis of data with censored outcomes.
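The quantile loss mentioned above is the standard Koenker–Bassett check (pinball) loss; a minimal sketch (not code from the thesis) shows its asymmetric, piecewise-linear form and why it is not differentiable at zero:

```python
import numpy as np

def check_loss(r, tau):
    """Koenker-Bassett check (pinball) loss: rho_tau(r) = r * (tau - 1{r < 0}).
    Piecewise linear in the residual r, with a kink at r = 0, so it is
    not differentiable there. tau = 0.5 recovers (half of) the LAD loss."""
    r = np.asarray(r, dtype=float)
    return r * (tau - (r < 0))

# tau = 0.5 penalizes both sides equally; tau = 0.9 penalizes
# under-prediction (positive residuals) nine times as much as
# over-prediction, which is what targets the 0.9 quantile.
residuals = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
median_losses = check_loss(residuals, 0.5)   # symmetric around 0
upper_losses = check_loss(residuals, 0.9)    # tilted toward r > 0
```

Minimizing the average check loss over a dataset yields the sample tau-quantile, just as minimizing squared error yields the mean.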
Censored quantile regression provides a useful alternative to the Cox proportional hazards model and the accelerated failure time (AFT) model for analyzing survival data. In this thesis, we consider the sparse high-dimensional censored quantile regression problem with an l1 penalty.

In the first three chapters, we introduce the basic concepts and background of the quantile loss function with censored outcomes. In the fourth chapter, we introduce two smoothed quantile loss functions, including the quantile Huber penalty, to replace the original objective function and make it differentiable. Since the quantile Huber penalty has the same optimal solution as the quantile loss function, we mainly use it in the rest of the thesis. With the help of these smoothing functions, we can compute the first- and second-order partial derivatives of the objective function. Combined with a weighted l1 penalty, we apply an efficient algorithm called MIRL1 to perform variable selection. Convergence of our algorithm and its asymptotic properties under standard conditions are also established.

In simulations, the false positive rates (FPR) are essentially zero and the true positive rates (TPR) are almost all equal to one (particularly in the normally distributed case), which indicates that all significant variables are selected while insignificant ones are generally excluded. The mean squared errors show that our procedure also performs well on coefficient estimation, as do the oracle proportions.
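The abstract does not reproduce the exact smoothing formulas, but one common form of the quantile Huber penalty (quadratic inside a band whose width is set by a smoothing parameter gamma, linear outside it) illustrates how the kink of the check loss is removed; the function names and this particular parametrization are illustrative, not necessarily the thesis's:

```python
import numpy as np

def quantile_huber(r, tau, gamma):
    """One common quantile Huber smoothing of the check loss:
    quadratic r^2 / (2*gamma) on [-gamma*(1-tau), gamma*tau],
    linear with slopes tau and tau-1 outside that band.
    Continuously differentiable; approaches the check loss as gamma -> 0."""
    r = np.asarray(r, dtype=float)
    return np.where(
        r > gamma * tau,
        tau * r - gamma * tau**2 / 2,                    # right linear piece
        np.where(
            r < -gamma * (1 - tau),
            (tau - 1) * r - gamma * (1 - tau)**2 / 2,    # left linear piece
            r**2 / (2 * gamma),                          # quadratic middle
        ),
    )

def quantile_huber_grad(r, tau, gamma):
    """First derivative: r / gamma clamped to [tau - 1, tau].
    Unlike the check loss, this is defined and continuous everywhere,
    which is what enables gradient-based algorithms."""
    r = np.asarray(r, dtype=float)
    return np.clip(r / gamma, tau - 1, tau)
```

The constant offsets in the linear pieces are chosen so that the loss and its derivative match at the two breakpoints, which is the differentiability property the fourth chapter relies on.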
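The details of MIRL1 are not given in this abstract. As a generic illustration of how a weighted l1 penalty drives variable selection, the classic componentwise soft-thresholding proximal step and the standard reweighting rule w_j = 1 / (|beta_j| + eps) can be sketched as follows; this is a textbook reweighted-l1 scheme, not the thesis's algorithm:

```python
import numpy as np

def prox_weighted_l1(v, weights, step):
    """Proximal operator of step * sum_j weights[j] * |beta_j|:
    componentwise soft-thresholding with threshold step * weights[j].
    Coefficients below their threshold are set exactly to zero,
    which is what performs variable selection."""
    t = step * weights
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def update_weights(beta, eps=1e-3):
    """Classic reweighting rule w_j = 1 / (|beta_j| + eps): coefficients
    that are currently large receive a small penalty weight, while small
    coefficients are penalized heavily and pushed to zero."""
    return 1.0 / (np.abs(beta) + eps)
```

Alternating a gradient step on a smoothed quantile loss with such a prox step, then refreshing the weights, gives a simple reweighted proximal-gradient loop in the same spirit as weighted-l1 selection methods.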
Keywords/Search Tags: Censored quantile regression, Smoothing function, Weighted l1 penalty