I personally think that the tidyverse ecosystem makes a huge difference to my daily work. It is easy to use and has a complete set of tools suitable for most of my tasks. Better still, the dplyr package can work with databases like PostgreSQL and Spark, which partially solves R's memory constraints. However, the tidyverse is not that great when it comes to performance. In my experience, when I use dplyr together with broom, performance gets even worse.
I started learning R with data.table, but since I came across the tidyverse I don't use data.table as frequently. I turn to data.table only when dplyr is too slow.
I was wondering whether any work is being done to combine the strengths of dplyr and data.table, so that the memory and performance issues can be addressed in a unified framework.
dtplyr is a data.table backend for dplyr: https://github.com/hadley/dtplyr - be sure to read the README on dtplyr. dtplyr makes more copies than you would make if you were working with data.table directly.
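To illustrate, here is a minimal sketch of how dtplyr can be used, assuming a recent version of the package (1.0 or later) where `lazy_dt()` wraps a data frame so that subsequent dplyr verbs are translated into data.table code:

```r
library(dtplyr)
library(dplyr)

# Wrap a data frame once; operations are translated to data.table
# and only evaluated when you collect the result.
dt <- lazy_dt(mtcars)

result <- dt %>%
  group_by(cyl) %>%
  summarise(mean_mpg = mean(mpg)) %>%
  as_tibble()  # forces evaluation via data.table
```

You can also call `show_query()` on the lazy object to see the data.table code dtplyr would generate, which is useful for checking where extra copies are made.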
Rather than "use data.table when making Shiny apps," I think better advice would be: "data.table is one option to try when you discover you need more speed from your R script."
Many Shiny apps have no performance constraints related to data size, and using data.table in those situations would bring no benefit.