Hi, I'm considering using Shiny, but I have some questions about data I/O. Our production environment mainly uses SQL Server.
This confuses me a bit because I come from Power BI. There, the server loads the processed data into the model, so consuming the model involves no I/O (no calls to a database to retrieve data). It doesn't matter if the refresh takes 2 hours; the data is already in the model when the user consumes it, so it is fast.
So, back to Shiny, what would be the best practice?
- Should I load the data directly from SQL Server each time the app launches (outside the ui and server functions, of course)?
My understanding is that once the app "launches" it stays "open" on the Shiny server, so this code won't re-run for every user who opens the app; the data is already available (similar to Power BI). Is this correct? The app would then have a structure like this:
```r
Table1 <- sqlQuery(conn, "SELECT ... FROM ... WHERE ...")
Table2 <- sqlQuery(conn, "SELECT ... FROM ... WHERE ...")

# Maybe even process the data directly in the SQL query??
processedTable <- Table1 %>%
  inner_join(...) %>%
  mutate(...) %>%
  unite(...) %>%
  group_by(...) %>%
  summarise(...)

ui <- ...
server <- ...
shinyApp(ui = ui, server = server)
```
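To make the idea concrete, here is a minimal runnable sketch of that structure using DBI/odbc (the DSN, queries, column names, and outputs are made-up placeholders, not our real setup):

```r
library(DBI)
library(odbc)
library(dplyr)
library(shiny)

# Runs once per R process when the app launches, not once per user session
conn <- dbConnect(odbc::odbc(), dsn = "MySqlServerDSN")  # placeholder DSN

Table1 <- dbGetQuery(conn, "SELECT id, amount FROM sales")      # placeholder query
Table2 <- dbGetQuery(conn, "SELECT id, region FROM customers")  # placeholder query
dbDisconnect(conn)

# Processing is done once at launch and shared by all sessions
processedTable <- Table1 %>%
  inner_join(Table2, by = "id") %>%
  group_by(region) %>%
  summarise(total = sum(amount))

ui <- fluidPage(tableOutput("tbl"))
server <- function(input, output, session) {
  output$tbl <- renderTable(processedTable)
}
shinyApp(ui = ui, server = server)
```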
- But if heavy data processing is needed (say, a `complete(...)` call that takes 45 minutes to run), would it be better to do this outside app.R? For example, having a separate R script process the data and write it to the Shiny Server as a .csv, feather file, or whatever, and then simply reading the processed data directly into the app:
```r
processedTable <- read_feather(...)

ui <- ...
server <- ...
shinyApp(ui = ui, server = server)
```
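For that second option, I imagine the external script would look something like this (again a sketch; the DSN, query, columns, and output path are placeholders, and the script would be run on a schedule, e.g. with cron):

```r
# refresh_data.R -- scheduled outside the Shiny app (e.g. via cron)
library(DBI)
library(odbc)
library(tidyr)
library(feather)

conn <- dbConnect(odbc::odbc(), dsn = "MySqlServerDSN")          # placeholder DSN
raw <- dbGetQuery(conn, "SELECT date, product, qty FROM sales")  # placeholder query
dbDisconnect(conn)

# The slow step (e.g. the hypothetical 45-minute complete() call)
# happens here, never inside the app
processed <- complete(raw, date, product, fill = list(qty = 0))

# Write atomically: build the file elsewhere, then rename, so the app
# never reads a half-written file
tmp <- tempfile(fileext = ".feather")
write_feather(processed, tmp)
file.rename(tmp, "/srv/shiny-server/myapp/data/processed.feather")  # placeholder path
```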
- And what about data refreshes? If an R script processes the data outside the app, it's trivial, but if the data is loaded inside the app, should the app be relaunched periodically?
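One idea I've come across for avoiding relaunches is `shiny::reactiveFileReader`, which polls a file's modification time and re-reads it only when it changes. Would something like this work here (file path is a placeholder)?

```r
library(shiny)
library(feather)

ui <- fluidPage(tableOutput("tbl"))

server <- function(input, output, session) {
  # Checks the file's mtime every 60 s; re-reads only when the external
  # refresh script has written a new version, so no relaunch is needed
  processedTable <- reactiveFileReader(
    intervalMillis = 60000,
    session = session,
    filePath = "data/processed.feather",  # placeholder path
    readFunc = read_feather
  )
  output$tbl <- renderTable(processedTable())
}

shinyApp(ui = ui, server = server)
```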
I'm sure this is an easy one for you all.
Have a nice day.