tuneLength vs tuneGrid in caret

When using caret to tune ML algorithms, you can choose between tuneGrid and tuneLength in the train() function. Most of the time I see people using tuneGrid, yet I can't see a reason not to use tuneLength. Is there any reason why so many people choose tuneGrid instead of tuneLength?
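To make the question concrete, here is a rough sketch of the two calls for a random forest (assuming the built-in iris data and that the randomForest package behind method = "rf" is installed):

```r
library(caret)
set.seed(1)

ctrl <- trainControl(method = "cv", number = 5)

# tuneLength: let caret pick 5 candidate values of mtry on its own
fit_len <- train(Species ~ ., data = iris, method = "rf",
                 trControl = ctrl, tuneLength = 5)

# tuneGrid: spell out the candidate values yourself
fit_grid <- train(Species ~ ., data = iris, method = "rf",
                  trControl = ctrl,
                  tuneGrid = data.frame(mtry = 1:4))

fit_len$results  # one row per candidate that was tried
```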

To try an analogy, I suppose it's like choosing to drive a manual rather than an automatic: with tuneGrid you explicitly specify the parameter values it should try? (I have very limited experience with the caret framework.)

What I don't understand is why you would explicitly try out a certain set of parameters. No matter how well you understand your data or the research topic, you can't have prior knowledge of the best parameter values. If I'm using random forest, why should I, for example, explicitly try mtry = seq(1, 10) instead of using tuneLength = 10?
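For what it's worth, you can inspect what caret would try for a given tuneLength by calling the model's grid function directly; a minimal sketch, again assuming the built-in iris data:

```r
library(caret)

# Each caret model ships a $grid function; the tuneLength value is passed in as `len`
rf_info <- getModelInfo("rf", regex = FALSE)[["rf"]]

rf_info$parameters   # "rf" exposes a single tuning parameter, mtry
rf_info$grid(x = iris[, 1:4], y = iris$Species, len = 10)
```

For "rf" the candidate mtry values are derived from the number of predictors, so tuneLength = 10 is not the same thing as tuneGrid = data.frame(mtry = 1:10).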

People (like me) have some grids that they consistently use for certain models. Also, you might want to fix some parameters or override what caret is doing. People have different feelings on this, so we give them multiple options.
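For example, with a model that tunes more than one parameter you might pin one of them and only search over the other, which tuneLength alone can't express. A rough sketch with method = "glmnet" (assuming the glmnet package is installed and using the built-in iris data as a stand-in):

```r
library(caret)
set.seed(1)

ctrl <- trainControl(method = "cv", number = 5)

# Fix alpha at 1 (pure lasso) and only search over lambda,
# instead of the alpha/lambda combinations caret would generate on its own
lasso_grid <- expand.grid(alpha  = 1,
                          lambda = 10^seq(-4, 0, length.out = 20))

fit <- train(Sepal.Length ~ ., data = iris, method = "glmnet",
             trControl = ctrl, tuneGrid = lasso_grid)

fit$bestTune
```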
