Is there a way to monitor progress in `tune_grid`?

Is there a way to monitor progress in `tune_grid()`? I have 5 `tune()` parameters (1 in the recipe and 4 in the model) and set `levels = 5` in `grid_regular()`. With 10-fold cross-validation I think this means 31,250 models to fit (5^5 = 3,125 candidates × 10 folds), but this might not be the case:

> Suppose there are m tuning parameter combinations. `tune_grid()` may not require all m model/recipe fits across each resample. (source)
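For reference, the candidate count can be checked directly by building the grid; a minimal sketch, assuming four of `mlp()`'s tunable parameters from the dials package (the recipe parameter would multiply this by another factor of 5):

```r
library(dials)

# 4 model parameters at 5 levels each -> 5^4 = 625 candidates
grid <- grid_regular(
  hidden_units(),
  penalty(),
  epochs(),
  dropout(),
  levels = 5
)
nrow(grid)
# With a 5th (recipe) parameter: 5^5 = 3,125 candidates
# Across 10 resamples: 3,125 * 10 = 31,250 potential fits
```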

I'm fitting mlp() models with the keras engine and it shows me progress in each epoch (and a loss plot that I wish I had turned off), but I don't know the overall progress. Any thoughts about a better way to set this up to monitor?

I am attempting to run in parallel:

```r
# Use physical cores only, and leave one free for the OS
all_cores <- parallel::detectCores(logical = FALSE)

library(doParallel)
cl <- makePSOCKcluster(all_cores - 1)
registerDoParallel(cl)
```

Not in parallel.

There is a long-standing (more than 10 years) issue with some parallel processing technologies where they don't return the logging results.

I've been talking to various people about how to deal with this, but nothing yet. I'm hoping that Henrik Bengtsson can work it out with his future package.
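So when running sequentially (with no parallel backend registered), per-fit logging is available; a minimal sketch using `control_grid()` from the tune package, where `workflow`, `folds`, and `grid` are placeholders for your own objects:

```r
library(tune)

# verbose = TRUE prints a log line as each resample/model combination is fit
ctrl <- control_grid(verbose = TRUE)

# res <- tune_grid(workflow, resamples = folds, grid = grid, control = ctrl)
```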

One other issue specific to your situation: we can't parallel process models that use the keras engine across CPUs (as we do with almost everything else). If you request it, it won't do it (since it cannot).

I'm working on a torch package that has mlp models. Since that is all C++ (not Python), we will be able to parallelize over multiple CPUs.

I would suggest using tune_bayes() for your model.
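That would also sidestep the full grid: Bayesian optimization fits far fewer candidates and logs each iteration. A minimal sketch, again with `workflow` and `folds` as placeholders:

```r
library(tune)

# verbose = TRUE reports progress at each search iteration
ctrl <- control_bayes(verbose = TRUE)

# res <- tune_bayes(workflow, resamples = folds, iter = 25, control = ctrl)
```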

