Should one pass a JSON data frame to a plumber API instead of feeding it row by row? If yes, how?

I want to expose my model as an API using plumber. I have followed and adapted Shirin Glander's blog post (How to make your machine learning model available as an API with the plumber package) to use a POST request with JSON data (one row of my data frame).

I am stuck adapting the POST method in 'plumber.R' to accept a JSON data frame built with

tibble::tibble(dfr) %>% as.list() %>% jsonlite::toJSON(.) 

I can work around the issue by using purrr::map_df with a row index, but I am not sure that is good practice. The API will be backed by a Shiny app, so requests should not be too large. Can someone advise me on this point, please?
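For what it's worth, jsonlite can serialize an entire data frame in one call: by default, toJSON() converts a data frame into a JSON array of row-objects, so there is no need to loop over rows. A minimal client-side sketch (the endpoint URL is a placeholder, and httr is assumed for the request):

```r
library(jsonlite)
library(httr)

# Example data frame; both rows are serialized in one go
dfr <- data.frame(x = c(1.5, 2.5), y = c("a", "b"))

# toJSON() defaults to dataframe = "rows", producing
# an array of row-objects: [{"x":1.5,"y":"a"},{"x":2.5,"y":"b"}]
body <- toJSON(dfr)

# POST the whole frame to the (hypothetical) endpoint in one request
res <- POST("http://localhost:8000/predict",
            body = body,
            content_type_json())
```

This keeps the payload compact and avoids the per-row map_df round trip.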

Since this summer, the plumber development version on GitHub has supported parameters of type list. We used a similar row-by-row approach before that.

Setting a default value for dt gives callers an example of how to structure the JSON body for your API. I prefer this to having to define every parameter individually in the API.

What we use internally is something like that:

library(plumber)
library(xgboost)

# Pre-trained model and an example payload used as the default value
model <- readRDS("model.RDS")
sample_dt <- readRDS("sample_dt.RDS")

#* Predict using the model
#* @param dt:list
#* @post /predict
function(dt = sample_dt) {
  # xgboost's predict() expects `newdata` (not `new_data`);
  # dt arrives as a parsed list, so coerce it to a matrix first
  predict(model, newdata = as.matrix(as.data.frame(dt)))
}

Sorry for the late answer, and thanks a lot for your help. I hope to have time to try this tomorrow; it seems promising!
