I have a general/theoretical question about hyperparameter selection. I'm reading the book "Machine Learning with R, the tidyverse, and mlr" and the "mlr3 book" at https://mlr3book.mlr-org.com/. In both resources, hyperparameter tuning is performed (or at least described) before the cross-validation step, to obtain the optimal hyperparameters for the final model. However, both sources also note that cross-validation should incorporate the hyperparameter tuning within it (nested resampling).

My question is: why can't we simply perform nested resampling, take the optimal hyperparameters from that step, and drop the earlier standalone tuning step? Shouldn't the two approaches generally produce similar results? If not, will one consistently outperform the other?
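To make the question concrete, here is a minimal, library-free sketch of what I understand nested resampling to do. This is not mlr3 code; the toy model (a one-parameter nearest-neighbour regressor), the data, and all the names (`tune`, `nested_cv`, etc.) are my own, just to illustrate the two steps I'm asking about:

```python
# Toy nested cross-validation sketch (not mlr3; all names are illustrative).

def knn_predict(train, x, k):
    # Predict the mean target of the k training points closest in feature value.
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in nearest) / k

def mse(train, test, k):
    # Mean squared error of the toy model on a held-out set.
    return sum((knn_predict(train, x, k) - y) ** 2 for x, y in test) / len(test)

def folds(data, n):
    # Split data into n (train, test) pairs for cross-validation.
    return [(data[:i * len(data) // n] + data[(i + 1) * len(data) // n:],
             data[i * len(data) // n:(i + 1) * len(data) // n])
            for i in range(n)]

def tune(data, ks, n_inner=3):
    # Inner loop: pick the k with the lowest average cross-validated error.
    return min(ks, key=lambda k: sum(mse(tr, te, k) for tr, te in folds(data, n_inner)))

def nested_cv(data, ks, n_outer=3):
    # Outer loop: tuning sees only the outer-training split of each fold,
    # so the averaged score estimates the performance of the WHOLE pipeline.
    scores = []
    for outer_train, outer_test in folds(data, n_outer):
        best_k = tune(outer_train, ks)
        scores.append(mse(outer_train, outer_test, best_k))
    return sum(scores) / len(scores)

data = [(x, 2 * x + (x % 3)) for x in range(30)]  # toy regression data
ks = [1, 3, 5]
score = nested_cv(data, ks)   # performance estimate from nested resampling
final_k = tune(data, ks)      # the separate, standalone tuning step on ALL data
```

So in this sketch, the standalone tuning step is the final `tune(data, ks)` call on the full data set, while `nested_cv` runs the same tuning repeatedly inside each outer fold. My question is essentially whether the `best_k` values chosen inside the outer folds could be used directly instead of that final call.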