Future of caret?


#1

Hey - looking at github.com/rstudio and seeing a ton of interesting work around keras, tensorflow, and tfestimators. I'm wondering if there's potential for a tidy-esque approach to a GPU modeling framework involving recipes and some version of caret, one that offloads some of the lower-level programming required for unique modeling techniques not already predefined within keras or tensorflow.

That would be solid


#2

I don’t know much about GPU modeling :flushed:, but Max Kuhn’s bookdown guide, The caret Package, was just updated on September 4th (as of my typing this), so it could be a good place to take a look.


#3

The fantastic greta package also does a good job of keeping things R-esque.
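To give a flavor of what "R-esque" means here, this is roughly what a simple linear regression looks like in greta (a sketch based on the package's introductory examples; exact argument names may differ across greta versions):

```r
library(greta)

# observed data (plain R vectors)
x <- iris$Petal.Length
y <- iris$Sepal.Length

# priors on the parameters
int  <- normal(0, 5)
coef <- normal(0, 3)
sd   <- lognormal(0, 1)

# likelihood: y is modeled as normal around a linear predictor
mu <- int + coef * x
distribution(y) <- normal(mu, sd)

# define the model and sample from it (runs on tensorflow under the hood)
m     <- model(int, coef, sd)
draws <- mcmc(m, n_samples = 500)
```

The appeal is that the model reads like ordinary R arithmetic while the heavy lifting happens in tensorflow.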


#4

Yes! And its author, Nick Golding, has done a great job with the documentation, too:
https://goldingn.github.io/greta/


#5

I’ll have to check greta out. At first, I had it confused with gretl and I had flashbacks to darker times haha.


#6

Thank you! I’m also secretly hoping that Max Kuhn sees this post and opens up some office hours himself :smiley:


#7

He’s also super-helpful.


#8

> if there is some potential to see a tidy-esque approach to a GPU modeling framework involving recipes and some version of caret that offloads some of the lower-level programming required for unique modeling techniques that may not be predefined within keras or tensorflow already

That hadn’t crossed my mind. There are, and will be, connections between caret/recipes and the tensorflow packages. The last release of caret contains two neural net models built on keras, and I’m playing with adding autoencoders to recipes.
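For context, here's roughly what one of those keras-backed models looks like from the caret side. The method name `"mlpKerasDropout"` is my understanding of one of the two keras MLP methods in that release; treat the name and the tuning details as assumptions:

```r
library(caret)

# fit a keras-backed multilayer perceptron through caret's usual interface;
# "mlpKerasDropout" is assumed to be one of the two keras models mentioned above
fit <- train(
  Species ~ ., data = iris,
  method    = "mlpKerasDropout",
  trControl = trainControl(method = "cv", number = 5),
  epochs    = 20   # passed through to keras
)
```

The point is that the user keeps the familiar `train()`/`trainControl()` workflow while keras handles the computation.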

One issue is that “not be predefined within keras or tensorflow already” means a lot of close-to-the-metal work in tensorflow, and right now I would avoid that since 1) I don’t know the intricacies of that system and 2) the API might change a lot.

Also, it’s my belief that the GPU is optimized and fast for GPU-friendly work (like matrix calculations) and doesn’t help that much otherwise. For example, I don’t know what it would offer for something like trees.

I think that a more likely integration would be to have some recipes steps off-loaded to tensorflow. The autoencoder is a good prototype for that. I’d like to have complete backends for recipes (as in dplyr) so that you can use remote data in another system and use recipes to tell that system what to do.
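To make the "steps off-loaded to a backend" idea concrete, here's a minimal recipes pipeline of the kind that could delegate its computation elsewhere. This sketch uses the standard recipes verbs; argument names (e.g. for `step_pca()` and `bake()`) have shifted across recipes versions, so check the version you have:

```r
library(recipes)

# declare the preprocessing, independent of where it will execute
rec <- recipe(Species ~ ., data = iris) %>%
  step_center(all_predictors()) %>%   # subtract training means
  step_scale(all_predictors()) %>%    # divide by training sds
  step_pca(all_predictors())          # project onto principal components

# estimate the steps from training data, then apply them
prepped <- prep(rec, training = iris)
baked   <- bake(prepped, iris)
```

In the dplyr analogy, the recipe plays the role of the unevaluated query: the same specification could, in principle, be executed by tensorflow (or another remote system) instead of in-memory R.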


#9

Yes, I will get on that and schedule some.


#10

This is super exciting stuff! I am a huuuge caret fanboy and have been translating a lot of our pre-processing work over to recipes as well, so anything that promotes that framework is fantastic in my book.

I hadn’t thought of the algorithms that fall outside the optimal scope of GPU processing. Also, from a user perspective, we tend to offload those types of calculations to parallelization across CPU cores. Not sure if that’s useful info or not, but from my perspective a combo of CPU and GPU processing is super strong.