Adjusting model-estimated posterior probabilities after re-balancing or applying case weights

That has been my experience with any type of subsampling for class imbalances; the default 50% cutoff becomes more reasonable and the AUC is often equivalent to that of the original model.
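A minimal sketch of that effect, assuming a synthetic imbalanced problem and a random forest (both are my choices for illustration, not anything from the thread): since the AUC depends only on the rank ordering of the scores, down-sampling the majority class typically leaves it close to the original model's value.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic problem with roughly a 5% event rate
X, y = make_classification(n_samples=20000, weights=[0.95, 0.05],
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# Fit on the original, imbalanced training set
rf = RandomForestClassifier(random_state=42).fit(X_tr, y_tr)

# Down-sample the majority class to match the minority class count
rng = np.random.default_rng(42)
minority = np.where(y_tr == 1)[0]
majority = rng.choice(np.where(y_tr == 0)[0], size=minority.size,
                      replace=False)
keep = np.concatenate([minority, majority])
rf_down = RandomForestClassifier(random_state=42).fit(X_tr[keep], y_tr[keep])

# AUC depends only on how the scores rank the samples, so the two models
# usually come out close; exact numbers will vary with the data.
print("AUC, original:    ", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
print("AUC, down-sampled:", roc_auc_score(y_te, rf_down.predict_proba(X_te)[:, 1]))
```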

No, I wouldn't say that. The sampling method has an effect on the model itself; it isn't just the same model with different weights/coefficients (see the footnote below).

There is a positive effect on the calibration of the model, in the sense that the classification (i.e. posterior) probabilities do not have the pathological distributions (piled up near zero for the event class) that the imbalance induces.
I tried to demonstrate this on slide 14 of the classification notes found here.

Yes to the first question (and yes, that is a good thing). Without rebalancing, it wouldn't be uncommon for your most likely event to have a class probability below 60% (or even lower).
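To illustrate the distribution shift, here is a sketch on the same assumed synthetic setup as above (again my own construction, not from the original answer): without rebalancing, the predicted event probabilities for the true events pile up well below 0.5, and down-sampling pushes them back toward the middle of the scale.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20000, weights=[0.95, 0.05],
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# Indices for a down-sampled training set (majority reduced to minority size)
rng = np.random.default_rng(42)
minority = np.where(y_tr == 1)[0]
majority = rng.choice(np.where(y_tr == 0)[0], size=minority.size,
                      replace=False)
keep = np.concatenate([minority, majority])

for label, idx in [("original", slice(None)), ("down-sampled", keep)]:
    rf = RandomForestClassifier(random_state=42).fit(X_tr[idx], y_tr[idx])
    p_event = rf.predict_proba(X_te)[:, 1][y_te == 1]
    print(f"{label:>12}: median P(event) among true events = "
          f"{np.median(p_event):.2f}, share at or above 0.5 = "
          f"{(p_event >= 0.5).mean():.2f}")
```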

I wouldn't readjust the class probabilities further (since that is what the subsampling is already accomplishing). If you don't subsample, you could use another recalibration method (Bayes' theorem or monotone/isotonic regression); I tend to use subsampling to solve this issue, though.
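For reference, a hedged sketch of the Bayes'-theorem recalibration mentioned above; the function and variable names (`prior_correct`, `rho`, `pi`) are mine, not from the answer. A model trained at an artificial event rate `rho` can have its probabilities mapped back to the true population prior `pi`:

```python
import numpy as np

def prior_correct(p, rho, pi):
    """Map probabilities from the artificial training prior rho back to the
    true population prior pi via Bayes' theorem."""
    num = p * pi / rho
    return num / (num + (1 - p) * (1 - pi) / (1 - rho))

# A 0.5 prediction from a model trained on 50/50 data maps back to the true
# 5% base rate; higher scores deflate proportionally.
print(prior_correct(np.array([0.2, 0.5, 0.8]), rho=0.5, pi=0.05))
```

The monotone-regression alternative fits an isotonic map from predicted probabilities to observed event rates on held-out data, e.g. via scikit-learn's `CalibratedClassifierCV(method="isotonic")`.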

[footnote] The exception to this is ordinary logistic regression. The class imbalance drives the intercept to an extreme, and rebalancing the data only affects the intercept parameter (and all of the standard errors).
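Since only the intercept is affected, it can be corrected in closed form; this is the "prior correction" described by King and Zeng (2001) for case-control sampling. The sketch below and its names are mine: `rho` is the event rate in the rebalanced training data and `pi` the true population event rate.

```python
import numpy as np

def corrected_intercept(b0_fitted, rho, pi):
    """Shift a logistic regression intercept fitted at (rebalanced) event
    rate rho back to the true population event rate pi."""
    return b0_fitted - np.log(((1 - pi) / pi) * (rho / (1 - rho)))

# A model fit on 50/50 rebalanced data for a true 5% event rate needs its
# intercept lowered by log(19) ~ 2.94 on the log-odds scale; the slope
# coefficients are left alone.
print(corrected_intercept(0.0, rho=0.5, pi=0.05))  # -> about -2.94
```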