I have a statistical rather than a programming question. I am asking it here because I know that the rstanarm crew follows this site, and I hope you will not mind the departure from the usual programming topics.

Specifically, when using frequentist methods for hierarchical linear models, one is cautioned that non-normality of the dependent variable (more precisely, of the model's residuals) can lead to important errors in the estimates of both the coefficients and their confidence intervals. One can largely eliminate problems with non-normality by transforming the dependent variable, for example with a Box-Cox transform. But this comes at the cost of a model that is non-linear in the untransformed dependent variable, a real loss in ease of explanation and application.
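To be concrete about the kind of transformation I mean, here is a sketch using `MASS::boxcox()` on a made-up, right-skewed grouped data set (the data and variable names are hypothetical, purely for illustration):

```r
## Hypothetical skewed response; a Box-Cox transform makes the
## residuals roughly normal, but the model is then fit on y_bc, not y.
library(MASS)

set.seed(1)
d <- data.frame(
  group = factor(rep(1:10, each = 20)),
  x     = rnorm(200)
)
d$y <- exp(1 + 0.5 * d$x + rnorm(200, sd = 0.4))  # right-skewed response

bc     <- boxcox(lm(y ~ x, data = d), plotit = FALSE)
lambda <- bc$x[which.max(bc$y)]        # profile-likelihood estimate of the exponent
d$y_bc <- (d$y^lambda - 1) / lambda    # transformed dependent variable
```

The awkwardness I am referring to is that any coefficient fitted to `y_bc` has to be back-transformed before it says anything about `y` itself.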

My question is, "How important is non-normality of the residuals when one is using Bayesian methods (such as stan_lmer) to fit a hierarchical linear model?" I raise this question because, when using MCMC to fit a model, one computes medians and credible intervals directly from the posterior draws, rather than from formulas that depend on the assumption of normality. I suspect that these summaries will be accurate even when the residuals are modestly to moderately non-normal. On the other hand, the likelihood itself still assumes normally distributed residuals.
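To make concrete what I mean by "using the actual draws," here is an untested sketch of how I understand the stan_lmer summaries to be computed (the model formula and data are hypothetical; priors are left at the rstanarm defaults):

```r
## Sketch: medians and credible intervals taken directly from the
## posterior draws, with no normality assumption in the summary step.
library(rstanarm)

fit <- stan_lmer(y ~ x + (1 | group), data = d, chains = 2, iter = 1000)

draws <- as.matrix(fit)                    # one column per parameter
median(draws[, "x"])                       # posterior median of the slope
quantile(draws[, "x"], c(0.05, 0.95))      # 90% credible interval by quantiles
posterior_interval(fit, pars = "x", prob = 0.90)  # built-in equivalent

## The residual-normality assumption itself can be checked graphically
## with a posterior predictive check:
pp_check(fit)
```

So the summaries are distribution-free given the draws, but the draws themselves come from a model whose likelihood is normal, which is exactly the tension my question is about.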

So, Bayesian experts, what is the answer to my question? Is normality of the dependent variable (or its residuals) less important in a Bayesian analysis?

Thanks to the rest of you for putting up with a core statistics theory question.

Larry Hunsicker