tidy() in tidymodels does not show the correct penalty for a glmnet elastic net

It is great to be able to get the coefficients of a glmnet elastic net with the tidy() command. It returns the coefficients of BestModel correctly, but the related penalty seems to be incorrect, or I am misinterpreting it. Below is code that shows the problem:

library(tidymodels)
set.seed(123)
DataTrain <- mtcars[, 1:4]
CvFolds <- vfold_cv(DataTrain, v = 3, repeats = 3)

ModelElastNet <- linear_reg(mode = "regression",
                            penalty = tune(), mixture = tune()) %>%
  set_engine("glmnet")

TuneParameters <- parameters(penalty(), mixture())
TuneGrid <- grid_regular(TuneParameters, levels = c(10, 10))

TuneResults <- tune_grid(ModelElastNet,
                         mpg ~ .,
                         resamples = CvFolds,
                         grid = TuneGrid,
                         metrics = metric_set(mae))

BestHyperParameters <- select_best(TuneResults, "mae")
BestModel <- finalize_model(ModelElastNet, BestHyperParameters) %>%
  fit(mpg ~ ., data = DataTrain)

# The command below returns the coefficients correctly,
# but the penalty seems to be wrong: it is rounded to the closest
# element of the penalty grid, in this case 1.
# The correct penalty/lambda is 0.5055.
CoefBestModelWrong <- tidy(BestModel)

# The command below returns the coefficients and the penalty correctly
CoefBestModel <- tidy(BestModel$fit) %>% filter(step == max(step))

Thanks in advance. Carsten

As noted in ?linear_reg(), parsnip does not pass a single penalty value to the fit (even though the value is finalized), since glmnet fits the entire regularization path at once and does not need one. The best value can be found in the model specification attached to the parsnip object. For your example:

> BestModel$spec$args$penalty
[1] 1

That is what is used by predict() and is what tune_grid() selected:

> BestHyperParameters
# A tibble: 1 x 3
  penalty mixture .config               
    <dbl>   <dbl> <chr>                 
1       1   0.222 Preprocessor1_Model030
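For example, something along these lines should show it in action (a sketch using your objects from above; multi_predict() is parsnip's helper for predicting at several penalty values from the same fitted glmnet path, without refitting):

# predictions at the penalty stored in the spec (1 here)
predict(BestModel, new_data = DataTrain)

# predictions at several penalty values from the same fit
multi_predict(BestModel, new_data = DataTrain, penalty = c(0.1, 0.5055, 1))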

A value of 0.5055 may be optimal for some other objective function, but for the resampling that you did, a value of 1 was selected as the best.

broom:::tidy.glmnet() doesn't know anything about this; it is just handed the glmnet object (without realizing that it came from parsnip), so you get the whole set of coefficients along the entire penalty path.
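If you want the coefficients at exactly the selected penalty, one option (a sketch; coef() on a glmnet object interpolates along the fitted path at the value you pass to s) is to ask the underlying glmnet fit directly:

# coefficients at the tuned penalty value (s = 1 here)
coef(BestModel$fit, s = 1)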

Hi Max,

Thank you for your quick reply. Now it makes perfect sense to me. I was aware that glmnet does not take a single penalty value and that this is more efficient. Your comment "even though the value is finalized" made it clear to me. I think I understand the logic now: a value that is not part of the grid cannot be finalized, which makes sense. Based on this I played around and used:

TuneParameters <- parameters(penalty(range = c(-0.2962750, -0.2962740)), mixture())
TuneGrid <- grid_regular(TuneParameters, levels = c(10, 10))

I chose that range because 10^(-0.2962745) ≈ 0.5055. With these settings, the results of tidy(BestModel) and tidy(BestModel$fit) %>% filter(step == max(step)) are identical, which makes sense because 0.5055 is now part of the parameter grid. Thanks a lot; not understanding this really bothered me.

As a side note: next week I will give a tidymodels presentation in a colleague's class to show students tidymodels. In my view caret was already good, but tidymodels makes our lives so much easier and our work more enjoyable. Thank you!
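PS: If I read the docs correctly, tune_grid() also accepts a plain data frame of candidate values, so a simpler way to pin the penalty at exactly 0.5055 might be to skip the log10 arithmetic and build the grid directly (a sketch, assuming tidyr's crossing(), which tidymodels attaches):

# alternative: supply exact candidate values as a data frame
TuneGrid <- crossing(penalty = 0.5055, mixture = seq(0, 1, length.out = 10))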

Thanks!

We're trying to improve the documentation for parsnip models, and articulating this issue with glmnet is not easy. I'll put these details in the upgraded documentation.
