In regression modeling of high-dimensional data, limiting model complexity is often necessary to obtain more parsimonious and interpretable models. Various penalized regression methods have been proposed in the literature to serve this purpose. These methods are typically used to avoid overfitting and to select the variables needed to predict an outcome variable. Ordinary least squares (OLS) regression estimates the model coefficients by minimizing the sum of squared residuals, whereas penalized regression adds a penalty term to this minimization problem with the goal of shrinking small coefficients towards zero. Bayesian penalized regression models have recently become popular because the prior distribution plays a role analogous to the penalty term in classical penalized regression. For example, assuming a Laplace prior for the regression coefficients in a Bayesian analysis leads to lasso estimates, known as the Bayesian lasso. Such so-called shrinkage priors likewise shrink small effects to zero while preserving the true large effects. Compared to classical penalized methods, Bayesian penalized methods have several advantages, such as automatic estimation of the penalty parameter, natural penalization through the prior distribution, and flexibility in the types of penalties that can be expressed. The aim of this study is to demonstrate some advantages of Bayesian penalized regression modeling with different shrinkage priors. To this end, we present a theoretical comparison of four shrinkage priors, each parameterized as a scale mixture of normal distributions, and compare their performance in predicting an outcome variable in a social science application.
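The contrast between OLS and penalized estimation described above can be sketched as follows. This is an illustration only, not the study's analysis: the simulated data, the penalty value `lam`, and the helper `lasso_cd` (a standard cyclic coordinate-descent lasso with soft-thresholding) are hypothetical choices.

```python
import numpy as np

# Hypothetical example: two large true effects and eight null effects
rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta_true = np.array([3.0, -2.0] + [0.0] * 8)
y = X @ beta_true + rng.normal(size=n)

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso estimates via cyclic coordinate descent with soft-thresholding."""
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)  # per-column sums of squares
    for _ in range(n_iter):
        for j in range(p):
            # partial residual: leave predictor j out of the current fit
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r
            # soft-thresholding: small coefficients are set exactly to zero
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ss[j]
    return beta

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]  # minimizes squared residuals only
beta_lasso = lasso_cd(X, y, lam=80.0)            # penalty shrinks small effects to zero
```

With the penalty added, the estimates of the eight null coefficients collapse to (near) zero while the two large effects survive, slightly shrunk; the OLS estimates of the null coefficients stay away from zero.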
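The scale-mixture parameterization mentioned above can also be checked numerically for the Bayesian lasso case: drawing a coefficient variance from an exponential distribution and then the coefficient from a normal with that variance yields, marginally, a Laplace draw. The parameter values below are illustrative, not taken from the study.

```python
import numpy as np

# Scale-mixture check (illustrative values): if
#   tau2 ~ Exponential(rate = lam^2 / 2)  and  beta | tau2 ~ N(0, tau2),
# then marginally beta ~ Laplace(0, scale = 1/lam).
rng = np.random.default_rng(1)
lam = 1.0
n_draws = 200_000

tau2 = rng.exponential(scale=2.0 / lam**2, size=n_draws)  # scale = 1/rate
beta = rng.normal(loc=0.0, scale=np.sqrt(tau2))

# Laplace(0, 1/lam) has variance 2/lam^2 and excess kurtosis 3;
# the mixture draws should match both moments closely.
var = beta.var()
exkurt = (beta**4).mean() / var**2 - 3.0
```

The heavy tails (excess kurtosis near 3, versus 0 for a normal) are what let this prior leave large effects nearly unshrunk while pulling small effects toward zero.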
Keywords: Bayesian penalized regression, Global-Local shrinkage, Shrinkage priors, Penalized regression