In Max's APM book, he proposes a couple of solutions to tackle severe class imbalance, such as re-balancing the modelling dataset or using case weights. After reading this chapter and applying some of the solutions to a couple of live examples, I came across a couple of questions:
- When working with the credit_data dataset from the recipes package, I checked whether applying up-sampling (even though the imbalance is not really severe) would result in a better model. As expected, the recall-precision trade-off was much better in the up-sampled model because of the higher recall, and that completely makes sense when the cost of approving a bad credit is so much higher than rejecting a good one. But I was very surprised to see so little effect on the AUC: the difference was almost negligible. AUC is a measure of rank-ordering, which is essentially what credit scoring is all about. Does that mean that in credit/fraud scoring applications up-sampling is not necessarily required, because the relative ordering of good/bad predictions is preserved even without up-sampling, as the AUC suggests? (A rough sketch of what I did is included after the questions.)
- Regardless of the answer to the first question, let's assume that up-sampling or case weights were used, which distorts the posterior probabilities estimated by the model. For example, if up-sampling was applied and the average default rate of the original set was 5% (understood as count_bad / n()), the estimated default rate for the training set (understood as the average estimated default probability) would be much higher, e.g. 15%. That also means that any future prediction of the model would not reflect the true, imbalanced nature of the original (pre-up-sampling) training set, correct? In this case, is there a statistically valid way of adjusting the estimated posterior probabilities of future predictions? (The second sketch below shows the kind of adjustment I have in mind.)
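For context on the first question, here is a minimal sketch of the kind of comparison I ran. It assumes the current tidymodels stack, where step_upsample() lives in the themis package and credit_data now ships with modeldata; the logistic regression model and the crude drop_na() handling of missing values are placeholders just for illustration:

```r
library(tidymodels)
library(themis)    # step_upsample()

data(credit_data, package = "modeldata")   # now in modeldata (was in recipes in older versions)
credit <- tidyr::drop_na(credit_data)      # crude NA handling, for illustration only

set.seed(123)
split <- initial_split(credit, strata = Status)
train <- training(split)
test  <- testing(split)

# Two recipes that differ only in the up-sampling step
rec_base <- recipe(Status ~ ., data = train)
rec_up   <- rec_base %>% step_upsample(Status)   # balance the classes in the training set only

fit_auc <- function(rec) {
  wf <- workflow() %>%
    add_recipe(rec) %>%
    add_model(logistic_reg())            # placeholder model
  fitted <- fit(wf, data = train)
  predict(fitted, test, type = "prob") %>%
    bind_cols(test["Status"]) %>%
    roc_auc(Status, .pred_bad)           # "bad" is the first factor level, i.e. the event
}

fit_auc(rec_base)   # AUC without up-sampling
fit_auc(rec_up)     # AUC with up-sampling -- almost identical in my runs
```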
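And for the second question, this is the kind of adjustment I had in mind: a prior-shift correction that re-weights the predicted probabilities by the ratio of the original class prior to the (up-sampled) training prior via Bayes' rule. The helper below is hypothetical, just to make the idea concrete; I am not sure whether this is considered statistically valid in general:

```r
# Hypothetical helper: map P(bad) estimated on an up-sampled training set
# back to the original base rate via a Bayes-rule prior correction.
#   p           predicted P(bad) from the model trained on up-sampled data
#   prior_train proportion of "bad" in the up-sampled training set, e.g. 0.50
#   prior_orig  proportion of "bad" in the original data, e.g. 0.05
adjust_prior <- function(p, prior_train, prior_orig) {
  num <- p * prior_orig / prior_train
  den <- num + (1 - p) * (1 - prior_orig) / (1 - prior_train)
  num / den
}

# A raw prediction of 0.30 from a model trained on a 50/50 up-sampled set
# corresponds to roughly 0.022 once the 5% base rate is restored.
adjust_prior(p = 0.30, prior_train = 0.50, prior_orig = 0.05)
```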
Thanks!