Plumber API does not re-read data into memory on each request

I have deployed a plumber API that runs in a Docker container on Kubernetes. The API is accompanied by an R script that queries some data from a database and writes a CSV file to '/srv/plumber/report.csv'. A crontab is set up to run this R script nightly to refresh the data in the report. The goal of the API is simply to serve the CSV file from '/srv/plumber/report.csv' to anyone who makes a GET request to the '/report' endpoint.

The cron job appears to be running successfully and writing 'report.csv'. However, when a user accesses the '/report' endpoint, the CSV file that is returned is not updated. I validated this by deleting the CSV file and then making a GET call to the API, and I still got the file back as a response. This leads me to believe that the data has been living in memory since the initial start of the API server and is not being updated. Does my SetupAPI.R file need to be modified to get this to update on each request?

SetupAPI.R

library(plumber)

df <- read.csv("/srv/plumber/report.csv")

#' Send csv file 
#' @get /report
getDataCsv <- function(req, res) {
  filename <- "report.csv"
  write.csv(df, filename, row.names = FALSE)
  include_file(filename, res, "text/csv")
}

Or do I need to add a crontab entry to restart the API server each time the file updates? If so, how would I do that, given that it is running in a Docker container? (NOTE: I do not want to have to restart the pod each day.)

Is it as simple as moving the read.csv line inside the route?

library(plumber)

#' Send csv file 
#' @get /report
getDataCsv <- function(req, res) {
  df <- read.csv("/srv/plumber/report.csv")
  filename <- "report.csv"
  write.csv(df, filename, row.names = FALSE)
  include_file(filename, res, "text/csv")
}
