Essays on semiparametric estimation and testing
Posted on: 2005-10-09 | Degree: Ph.D. | Type: Dissertation
University: University of Rochester | Candidate: Huh, In
GTID: 1450390008978670 | Subject: Economics

Abstract/Summary:

An introductory overview of the dissertation appears in Chapter 1.

Chapter 2 introduces an estimator for the binary choice model, exploiting the fact that if two conditional expectations are close, the corresponding indexes should also be close. The estimator does not assume a specific functional form for the error distribution, but it does assume that the index is a linear combination of regressors with a finite-dimensional parameter of interest. We use an order statistic of the estimated conditional expectations, since adjacent observations in this ordering have similar conditional expectations. The estimator is easy to compute: the first-step estimation of the conditional expectation has a closed-form solution if kernel estimation is used, and the second step is a least-squares fit, which also has a closed-form solution. Although the first-step estimator converges more slowly than the parametric rate, the estimator of the parameter of interest is root-n-consistent.

Chapter 3 considers estimation of the transformation model. Although there are many semiparametric estimators for transformation models, most do not allow for heteroskedasticity. The conditional expectation function is sensitive to heteroskedasticity, but the conditional quantile is unaffected by it under an appropriate quantile restriction. We can estimate the conditional quantiles without a parametric assumption on the quantile function, using the fact that if two quantiles are close, the corresponding indexes should also be close. The procedure is the same as in the previous chapter: first, we order the data by the estimated conditional quantiles, then we use adjacent pairs of observations to estimate the index parameters.
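The two-step procedure described for Chapter 2 can be sketched as follows. This is an illustrative reconstruction, not the dissertation's exact estimator: the Gaussian kernel, the bandwidth, the unit-norm normalization of the index coefficients, and the function names are all assumptions. The idea is that after sorting by the estimated conditional expectation, adjacent observations have nearly equal indexes, so the differenced regressors are nearly orthogonal to the index direction.

```python
import numpy as np

def kernel_regression(y, X, h):
    # Step 1 (closed form): Nadaraya-Watson estimate of E[y | x] at each
    # sample point, using a Gaussian product kernel with bandwidth h.
    # (Kernel choice and bandwidth are illustrative assumptions.)
    n = len(y)
    mhat = np.empty(n)
    for i in range(n):
        d = (X - X[i]) / h
        w = np.exp(-0.5 * np.sum(d * d, axis=1))
        mhat[i] = (w @ y) / w.sum()
    return mhat

def adjacent_pair_index_estimator(y, X, h=0.5):
    # Step 2 (closed form): order observations by the estimated conditional
    # expectation; adjacent observations then have similar indexes x'beta,
    # so (x_i - x_j)'beta is approximately zero for adjacent pairs.
    mhat = kernel_regression(y, X, h)
    order = np.argsort(mhat)
    dX = np.diff(X[order], axis=0)
    # Least-squares fit: minimize ||dX beta||^2 subject to ||beta|| = 1,
    # i.e., take the eigenvector of dX'dX with the smallest eigenvalue.
    _, vecs = np.linalg.eigh(dX.T @ dX)
    beta = vecs[:, 0]
    if beta[0] < 0:          # sign normalization (an assumption)
        beta = -beta
    return beta
```

The Chapter 3 variant replaces the first step with estimated conditional quantiles (which requires numerical optimization rather than a closed form) and then applies the same ordering-and-differencing second step.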
This estimator is root-n-consistent and allows for heteroskedasticity. Although the first-step estimation requires numerical optimization, the second step is a least-squares fit with a closed-form solution.

Chapter 4 focuses on the selection correction model. When the data suffer from selectivity bias, the model contains a nonlinear component. The selection correction component of the model is often specified through distributional assumptions on the residuals and a selection equation. One approach to the selection correction model is two-step, kernel-weighted least-squares estimation: the propensity scores are estimated in the first step, and kernel weights are assigned to the pairwise differences according to the estimated propensity scores. We simplify the second step by using the order statistics of the estimated propensity scores and the differences of adjacent observations to estimate the parameter in the linear part of the regression function. Using an order statistic of the estimated propensity scores eliminates the second step's kernel function and bandwidth choice.

The final chapter introduces a nonparametric test of the equality of regression functions, with the regression functions left unspecified. The test is a nonparametric version of the Chow test, that is, a poolability test for two data sets. Sample sizes are arguably more important in nonparametric estimation, because nonparametric estimators converge more slowly than parametric ones. The test uses the distance between the pooled regression and the separate ones. The rate of convergence of this test statistic differs from that of a significance test for a continuous random variable.

Keywords/Search Tags: Test, Estimation, Chapter, Model, Estimated propensity scores, Parametric, Conditional, Estimator
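The Chapter 4 simplification can be sketched as follows. This is a hedged illustration, not the dissertation's exact estimator: the kernel propensity-score estimator, the bandwidth, the simulated design, and the function names are assumptions. The point is that ordering the selected sample by the estimated propensity score and differencing adjacent observations removes the (unknown) selection-correction term without a second-step kernel or bandwidth.

```python
import numpy as np

def kernel_propensity(d, z, h):
    # First step: kernel (Nadaraya-Watson) estimate of the propensity
    # score P(d = 1 | z) at every observation; Gaussian kernel assumed.
    w = np.exp(-0.5 * ((z[:, None] - z[None, :]) / h) ** 2)
    return (w @ d) / w.sum(axis=1)

def adjacent_difference_beta(y, x, d, z, h=0.3):
    # Second step: restrict to the selected sample, sort by the estimated
    # propensity score, and difference adjacent observations. Adjacent
    # observations have nearly equal propensity scores, so the selection
    # correction term (a function of the propensity score) differences
    # out of y, leaving a closed-form least-squares fit for beta.
    phat = kernel_propensity(d.astype(float), z, h)
    sel = d == 1
    order = np.argsort(phat[sel])
    ys, xs = y[sel][order], x[sel][order]
    dy, dx = np.diff(ys), np.diff(xs)
    return (dx @ dy) / (dx @ dx)
```

Unlike the kernel-weighted second step it replaces, this version involves no second-step smoothing choices at all; only the first-step propensity-score estimate requires a bandwidth.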
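The poolability test of the final chapter can be illustrated with the following sketch, which measures the distance between the pooled kernel regression and the two group-specific fits. The dissertation derives the asymptotic distribution of the statistic; here, purely as an illustration and not the dissertation's method, the null distribution is calibrated by permuting group labels. The kernel, bandwidth, evaluation grid, and function names are all assumptions.

```python
import numpy as np

def nw(y, x, grid, h):
    # Nadaraya-Watson fit of E[y | x], evaluated on a fixed grid.
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def poolability_stat(y1, x1, y2, x2, grid, h):
    # Distance between the pooled regression and the separate ones,
    # averaged over the evaluation grid.
    yp, xp = np.concatenate([y1, y2]), np.concatenate([x1, x2])
    mp = nw(yp, xp, grid, h)
    return np.mean((nw(y1, x1, grid, h) - mp) ** 2
                   + (nw(y2, x2, grid, h) - mp) ** 2)

def permutation_pvalue(y1, x1, y2, x2, h=0.3, B=200, seed=0):
    # Illustrative calibration only: shuffle group labels B times and
    # compare the observed statistic with its permutation distribution.
    rng = np.random.default_rng(seed)
    grid = np.linspace(-1.5, 1.5, 50)
    t0 = poolability_stat(y1, x1, y2, x2, grid, h)
    y, x = np.concatenate([y1, y2]), np.concatenate([x1, x2])
    n1 = len(y1)
    exceed = 0
    for _ in range(B):
        idx = rng.permutation(len(y))
        t = poolability_stat(y[idx[:n1]], x[idx[:n1]],
                             y[idx[n1:]], x[idx[n1:]], grid, h)
        exceed += t >= t0
    return (1 + exceed) / (1 + B)
```

When the two regression functions coincide, the pooled and separate fits are close and the statistic is small; when they differ, the pooled fit cannot match either group and the statistic is large.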