Error loading package library when deploying app that uses multicore processing

Using rsconnect, I have deployed an app to shinyapps.io; however, the app fails to start, and my log file shows that an error occurs while loading the required packages.

R Version:  R version 4.1.1 (2021-08-10) 
snowfall 1.84-6.1 initialized (using snow 0.4-4): parallel execution on 12 CPUs.
Library plyr loaded.
Library plyr loaded in cluster.
Error in value[[3L]](cond) : 
       Stop: error loading library on slave(s): stars
Calls: local ... tryCatch -> tryCatchList -> tryCatchOne -> <Anonymous>
Shiny application exiting ...
Execution halted

The app is quite large, so I have tried to incorporate multicore processing using the package snowfall. On start-up, this package selects a number of available CPUs (referred to as slaves) over which to 'spread' the workload, and then loads all other required packages across this cluster of slave CPUs. My log file shows that, on starting up my app, snowfall selects 12 CPUs and then loads plyr across the cluster. But when snowfall moves on to the next required package (in this case stars), it fails to load that package across the CPU cluster, causing the application to crash on start-up. The app works perfectly on my local machine, and the error persists irrespective of the order in which the required packages are loaded to the cluster, i.e. it always occurs when loading the second package, not just when loading stars.
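For context, here is a minimal sketch of the snowfall set-up that produces the log above. sfInit(), sfLibrary(), sfLapply() and sfStop() are the actual snowfall functions; the CPU count, my_inputs and my_long_running_function are placeholders standing in for my real code:

library(snowfall)

# Initialise a cluster; on my machine this finds 12 CPUs, matching the
# "parallel execution on 12 CPUs" line in the log.
sfInit(parallel = TRUE, cpus = parallel::detectCores())

# Load each required package on the master and on every slave.
# Deployment fails on the second of these sfLibrary() calls.
sfLibrary(plyr)
sfLibrary(stars)

# sfExport() any needed objects, then run the heavy work, e.g.:
# results <- sfLapply(my_inputs, my_long_running_function)

sfStop()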

I don't suppose anyone has encountered a similar error when using multicore processing with shinyapps.io, or has any advice on where to start troubleshooting? Thanks!

Hi,

Welcome to the RStudio community!

I do not know exactly how shinyapps.io manages its internal load distribution, but I would not be surprised if parallel processing is simply not possible on the free tier, as they try to run as many apps from different users as possible and thus distribute the CPU power between users.

Also, there are memory limits, and when you go into parallel processing, memory usage typically goes up a lot as well, which is another reason this might not work. I suggest you look into Shiny Server Pro or RStudio Connect, which might offer a solution here.

If you want to stay on the free ShinyApps.io, I think you should get rid of the parallel part of your code for now and try to optimise another way :slight_smile:
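To illustrate what I mean by dropping the parallel part: a serial version of the same workflow is usually just the base-R equivalent of the cluster calls. (my_inputs and my_long_running_function below are placeholders for whatever the app actually computes.)

# Serial fallback: no cluster to initialise, packages load once as usual.
library(plyr)
library(stars)

# The same work run sequentially with base R instead of sfLapply().
results <- lapply(my_inputs, my_long_running_function)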

Good luck,
PJ

Thanks for your response PJ, very much appreciated!

Following up on your advice, I was able to come to some form of resolution, so I thought I would post my solution here in case others encounter a similar problem.

Although I remain unfamiliar with the internal workings that allow shinyapps.io to host multiple apps across many users, my problem does appear to have arisen from the way shinyapps.io manages its internal load distribution, but not necessarily because I was attempting multicore processing. Originally, I used the package snowfall to spread my workload across multiple CPUs, so on app start-up snowfall was attempting to identify available CPUs and then load the required packages onto each of them in preparation for running the desired functions. It is possible that this approach was too 'rigid' for the dynamic cloud hosting system used by shinyapps.io. Instead, I was able to successfully deploy my app using the foreach package. My app is therefore still using multiple CPUs, but it only identifies available CPUs at the point they are required by long-running functions, rather than attempting to 'hold on' to them in advance.
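For anyone who runs into the same problem, this is roughly the pattern I moved to. It is a sketch rather than my exact code: it assumes the doParallel backend for foreach, and my_inputs and my_long_running_function are placeholders. The key point is that the cluster is created inside the long-running code path and released as soon as the work is done, rather than being built when the app starts:

library(foreach)
library(doParallel)

# Create and register a cluster only when the heavy work actually starts.
cl <- parallel::makeCluster(max(1, parallel::detectCores() - 1))
registerDoParallel(cl)

# Each worker loads its own packages via .packages, instead of having
# them pushed onto a pre-built cluster at app start-up.
results <- foreach(x = my_inputs, .packages = c("plyr", "stars")) %dopar% {
  my_long_running_function(x)
}

# Release the workers as soon as the results are back.
parallel::stopCluster(cl)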

As PJ suggests, irrespective of the approach I use for parallel processing, my app is very demanding memory-wise. Unfortunately, the funding available for this project prices me out of the RStudio Connect option, so my search for a solution continues.

