I am building a Shiny app and have two questions about efficiency: how to store the data (in terms of object types), and whether to do the data wrangling inside or outside the app.
About the data:
The dataset contains 12 variables for all 50 states at the county level, spanning 11 years. I have an sf object of county geometries, an sf object with the aforementioned attributes attached, and an aspatial dataset (a normal data table without any geometry) containing FIPS codes for linking to the sf object.
About the app:
I am creating a Shiny app that examines various variables across the US; the user can filter by state or view the entire US. The user selects two variables at a time to be depicted in a bivariate scatter plot and two univariate plots, and can also select a variable to be visualized in a leaflet map. Here is an example of an app very similar to what I am trying to do.
Given that I have both aspatial visualizations (the scatter plots) and spatial ones (the leaflet map), what would be the most efficient way to store and load the data? Keeping a plain data table plus an sf object that already contains all the needed attributes? Or keeping just a data table and an sf object of county geometries, and linking them by FIPS code inside the app?
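To make the second option concrete, here is a minimal sketch of the join-by-FIPS approach using toy data frames; the column names and values are made up for illustration. With real data the same idea would use dplyr::left_join() with the sf object on the left, so the result keeps its geometry column.

```r
# Stand-in for the sf counties object (geometry shown as a placeholder
# string; a real sf object would carry an sfc geometry column).
counties <- data.frame(
  FIPS = c("01001", "01003"),
  geometry = c("poly1", "poly2"),
  stringsAsFactors = FALSE
)

# Aspatial table: FIPS plus the attribute variables (names are made up).
attrs <- data.frame(
  FIPS = c("01001", "01003"),
  var1 = c(10.2, 8.7),
  var2 = c(0.31, 0.45),
  stringsAsFactors = FALSE
)

# Do the join once at app start-up (e.g. in global.R, outside server()),
# not inside a reactive, so the cost is paid only once per session.
counties_full <- merge(counties, attrs, by = "FIPS", all.x = TRUE)
```

The aspatial plots can then read from `attrs` directly, while the leaflet map reads from `counties_full`, so the geometry is never duplicated across two attribute-bearing objects.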
Based on the user's selection, the data will be subset to a specific state. Would it be more efficient to create one dataset per state in advance, eliminating subsetting inside the Shiny app (i.e., 51 datasets: one per state plus one for all states)? The answer here may depend on the answer to the previous question.
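For comparison, the subset-inside-the-app alternative is a single reactive filter; at this scale (roughly 3,000 counties per year) it is usually cheap. The names below (`full_data`, `input$state`) are illustrative, not from my actual code.

```r
library(shiny)

server <- function(input, output, session) {
  # Re-computes only when input$state changes; all downstream plots and
  # the leaflet map read from this one reactive, so the subset happens
  # at most once per selection rather than once per output.
  state_data <- reactive({
    if (input$state == "All states") {
      full_data
    } else {
      full_data[full_data$state == input$state, ]
    }
  })
}
```

The pre-split alternative would instead save 51 .rds files ahead of time and `readRDS()` the one matching `input$state`, trading memory and filter time for disk reads.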
Thank you in advance for any insights you are able to share.