I think you need to add, or at least grant, the shiny
user access to Hadoop: https://spark.rstudio.com/articles/deployment-amazon-emr.html#create-a-user
I think I fixed the permission issue; however, now it says the app takes too long to load. I changed the config file to allow 300 seconds, but that doesn't seem to make a difference. When I check the logs, all I see is `su: ignore --preserve-environment, it's mutually exclusive to --login`.
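For context, this is roughly what my `/etc/shiny-server/shiny-server.conf` looks like after the change (a sketch based on the default Shiny Server config; `app_init_timeout` is the directive that controls startup time, and the paths are the stock defaults):

```
# /etc/shiny-server/shiny-server.conf
run_as shiny;

server {
  listen 3838;

  location / {
    # give slow Spark connections up to 300 s to initialize
    app_init_timeout 300;
    app_idle_timeout 300;

    site_dir /srv/shiny-server;
    log_dir /var/log/shiny-server;
  }
}
```

Restarting the shiny-server service after editing the file is required for the new timeout to take effect.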
@edgararuiz
Spark is not a good option for interactive queries the way MySQL is, because its in-memory computation model is designed around fail-fast-and-retry rather than low-latency responses.
Given that, I recommend pushing your data to MySQL/Elasticsearch/Redis; that will be a better option for your customers.