Working with big preuploaded files within Shiny

Hello,

Currently I have an application that ships roughly 2 GB of data as part of the deployment. I am on the xxxlarge instance size, so I have 8 GB of memory. The app is able to read in all of the data in the background without any problem. However, the moment the user makes some selections and the app needs to process them, I sometimes run out of memory, and other times the app just times out without any definite error that I can see in the logs.

Within the app, a subset of the above data is also passed through some pivot_longer calls to get it into the right shape for the user. I suspect this operation might be using a lot of memory. Are there ways to circumvent this?

Ideally I would like some advice on how best to manage this. Is this a case of changing settings on the Advanced panel on https://www.shinyapps.io/ to get a different worker configuration? Or is this a case where I need some sort of database connection, perform most of those operations there, and only import the outputs directly?

You might want to check library(data.table) for fast and memory-efficient data wrangling. data.table's counterpart to pivot_longer is melt.data.table.
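For illustration, here is a minimal sketch of a pivot_longer-style reshape rewritten with data.table::melt; the table and column names are made up:

```r
library(data.table)

# Hypothetical wide table: one id column plus one column per year
DT <- data.table(
  country = c("A", "B"),
  `2019`  = c(1.2, 3.4),
  `2020`  = c(2.1, 4.3)
)

# tidyr equivalent:
#   pivot_longer(df, cols = -country, names_to = "year", values_to = "value")
long <- melt(
  DT,
  id.vars       = "country",
  variable.name = "year",
  value.name    = "value"
)
```

Since DT is a data.table, melt dispatches to the data.table method, which tends to be faster and lighter on memory than reshaping a large data.frame.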

However, populating a database with your data and querying only those datasets you currently need is also a good approach.
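A minimal sketch of that idea, assuming a PostgreSQL database and invented table, column, and connection details, using DBI so that only the rows the user actually selected ever reach the R session:

```r
library(DBI)
library(RPostgres)

# Assumed connection details -- replace with your own
con <- dbConnect(
  RPostgres::Postgres(),
  dbname   = "mydb",
  host     = "localhost",
  user     = "shiny",
  password = Sys.getenv("DB_PASSWORD")
)

# Fetch only the subset the user selected, not the full 2 GB of data
selected <- dbGetQuery(
  con,
  "SELECT * FROM measurements WHERE country = $1",
  params = list("A")   # e.g. driven by a Shiny input
)

dbDisconnect(con)
```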

Thanks for the reply. Do you have any references to benchmarks comparing these approaches on big data?

Is there a simple way for me to create the melted data table on the database side without having it active within my session?

Here you can find a benchmark regarding data.table.

Concerning the reshape operations on the database side, it's hard to help without example data or knowing which database management system you are planning to use. Here is an example for PostgreSQL.
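To address the earlier question about keeping the melted table out of your R session: one option is to do the unpivot in SQL and fetch only the result. This is a sketch for PostgreSQL run from R via DBI, with invented table and column names:

```r
library(DBI)
library(RPostgres)

con <- dbConnect(RPostgres::Postgres(), dbname = "mydb")  # assumed connection

# Unpivot ("melt") on the database side with a LATERAL VALUES list,
# so only the long-format rows for the selected country reach R
long <- dbGetQuery(con, "
  SELECT m.country, v.year, v.value
  FROM measurements m
  CROSS JOIN LATERAL (VALUES
      ('2019', m.\"2019\"),
      ('2020', m.\"2020\")
  ) AS v(year, value)
  WHERE m.country = $1
", params = list("A"))

dbDisconnect(con)
```

You could also wrap that SELECT in a view (or materialized view) so the melted form behaves like a ready-made table that you only ever query in small slices.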