Best practices for setting up RStudio Server Pro/Connect in an enterprise environment

Hi everyone,

Hoping to gather some collective knowledge from this audience to help drive best practices for installation and configuration. We currently operate a two-server environment (Dev/Prod) on RHEL 7.4 with RStudio Server Pro (Dev) and RStudio Connect (Prod).

We are currently on officially supported IT infrastructure but face strong headwinds that threaten our stability. We will be implementing an IT policy where all servers are updated every month using "yum upgrade" and restarted. We also need to try to install/run applications as service accounts, not root. And finally, we need to isolate the package installations and move them to mounted directories.

The major concern I have is the yum upgrade. This threatens our R installation and R library dependencies. RStudio Server Pro allows compilation from source and the use of mounted directories in /opt/ for multiple concurrent versions. But who's to say that a new RHEL release of libxml won't break all the dependent packages? Does each library need to be compiled against every RHEL dependency? If it does, even packrat wouldn't save us.
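
For context, the side-by-side install pattern looks roughly like this (the version number and configure flags are illustrative; adjust for your environment):

$ export R_VERSION=3.5.1
$ curl -O https://cran.r-project.org/src/base/R-3/R-${R_VERSION}.tar.gz
$ tar -xzf R-${R_VERSION}.tar.gz && cd R-${R_VERSION}
$ ./configure --prefix=/opt/R/${R_VERSION} --enable-R-shlib
$ make && sudo make install

Each version lands in its own /opt/R/<version> directory, so installing a new R never clobbers an existing one.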

The other issue is consistency. Is there a simple way to force RStudio Connect to use specific versions of R when content is pushed to it? From what I gather, each application is re-compiled against whatever version of R exists on the Prod server. I'm guessing it has a similar capability to look in /opt/ for prepared versions of R?

Lastly, can anyone provide guidance on how they handle changes to Production applications? Do you consider each app/dashboard an application? Or do you consider the RStudio Connect environment the application? I can see both sides of this and I'm not sure which would be appropriate.

Thanks for the help!


Hi @wolfpack

Do you currently use a staging server as part of your analytic architecture?

This sounds like a great time to advocate for the necessity of a staging server. This article from @nathan covers some of the common use cases of staging servers w/r/t RStudio products: https://support.rstudio.com/hc/en-us/articles/360007833814-RStudio-staging-servers

Use case 1: Testing your computing environment
You can use the staging server to test changes to your compute environment. For example, when you upgrade Linux, R, or RStudio, you will want to test the upgrade in the staging server first. You can also test product configurations before applying them in production. Using staging will help you roll out consistent changes across your environment.


Hi @kellobri,

We do not have a staging server due to lack of funding. That has been one of our major frustrations with the single-copy installation allowed by the basic RStudio Connect license.

Wow, yeah. I can see how frustrating that is. Not having access to a staging server is really going to put you in a tough spot. Have you been thinking about any creative free/low-cost stopgap solutions for mimicking a staging environment?

Obviously having funding for an RStudio Connect staging license would be ideal. But if that option is going to be off the table for the time being, you could potentially create a staging environment without the professional products, simply with the goal of validating R package dependencies against system library updates.

I like using VirtualBox + Vagrant + a configuration management tool (like Ansible) on my local machine to set up small testing environments. There are Vagrant boxes freely available for many of the major Linux distributions. You could even install Shiny Server Open Source and test some manual data product deployments. I know that going this route would put a lot of the work back on you, and doing this kind of testing every month sounds like more than I would be comfortable taking on. There are certain parts that could probably be automated, but it will likely take a while to decide what your goals are and how to go about implementing that automation.
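
A minimal sketch of what that rehearsal could look like (bento/centos-7 is just one freely available RHEL-like box; any similar box would do):

$ vagrant init bento/centos-7
$ vagrant up
$ vagrant ssh -c "sudo yum upgrade -y"    # rehearse the monthly patch cycle
# ...then smoke-test your R package stack inside the box (once provisioned)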

I hope you continue to share your findings and solution wins along the way!

There are a few ways to handle this. I generally think of each app/dashboard as an application, and Connect allows reverting to previous versions, etc. You can also use "vanity URLs" to abstract a specific application endpoint away from the URL itself (after testing, moving that vanity URL to a new piece of content makes that content the new "live" version).

The question we/you really need to test is: what does yum upgrade do to already-built R packages? From yum upgrade's perspective, you are definitely affecting the whole Connect server, but I am hopeful that there will be some level of protection in the way packages are compiled/upgraded, and that you will only need to worry about backwards incompatibilities in the system libraries (which are hopefully few and far between).
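
One quick way to see what is at stake (paths are illustrative; point ldd at whichever compiled package libraries you care about):

# Compiled R packages link against system shared libraries at build time;
# ldd shows exactly which ones, e.g. for xml2:
$ ldd /opt/R/3.5.1/lib64/R/library/xml2/libs/xml2.so
# If yum upgrade bumps a soname (libxml2.so.2 -> libxml2.so.3), that link
# breaks and the package must be rebuilt; a patch release under the same
# soname generally does not.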

It sounds like you are interested in how Connect chooses an R version. The short answer is that it generally tries to match the R version as closely as it can to the one used in development. The longer answer is that you are in control, and it can be configured as you like (depending on the R versions that you make available).
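
If you want to pin that behavior down explicitly, the Admin Guide covers the relevant settings; roughly something like this (a sketch -- verify the setting name against your installed Connect version):

; /etc/rstudio-connect/rstudio-connect.gcfg
[Server]
; "nearest" matches the closest available R version;
; "exact" requires the same version that was used in development
RVersionMatching = nearest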

I definitely am interested to see what happens to R packages when the underlying dependency is upgraded. I'm also curious if your org could run the yum upgrade on RStudio Server Pro first, so at least you have some runway to debug issues before deploying those same library changes to RStudio Connect. As @kellobri suggested, maybe some type of configuration management (Ansible, Puppet, Chef, Terraform, etc.) will help keep the boxes in sync so your RSP server becomes your "staging" server of sorts.


Thank you @kellobri and @cole for the responses.

I am very concerned about the system library updates caused by yum upgrade. We pitched the idea of forced versioning on specific system libraries, but that was not acceptable to our IT group.
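
(For anyone curious, the mechanism we pitched was yum's versionlock plugin; package names here are illustrative:)

$ sudo yum install yum-plugin-versionlock
$ sudo yum versionlock libxml2 gdal    # pin the currently installed versions
$ sudo yum versionlock list            # review what is locked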

Theoretically it should not be a major concern since packages like gdal and libxml aren't likely to have major changes, but who knows. It would be nice if Connect offered some way to use specific pre-compiled system libraries (like packrat does for R packages). I guess this is how people use Docker, but that would require an unknown level of effort.

@cole, your comment about using a production vs. development version with vanity URLs is how we currently handle this issue. My question was (unclearly) focused on how to handle this in an SLA environment. What are your thoughts on having an SLA for each app vs. for the Connect application as a whole?

@kellobri I think the idea of using Shiny Server as the pseudo-staging environment might be the interim solution. Our upgrade pathway is to roll out updates to Dev servers one month prior to Production. This would allow us to test changes before they are made on Prod. Maybe we could run some automated scripts using shinytest on those applications, but we cannot take advantage of Connect since it lives on Prod. I guess we'll just write some cron jobs instead.
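
Something like this is what I have in mind for the cron side (app path and schedule are hypothetical):

# crontab entry: run shinytest regressions nightly after Dev is patched
0 2 * * * Rscript -e 'shinytest::testApp("/srv/shiny-server/our-app")' >> /var/log/shinytest.log 2>&1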


Sorry for the late response on this. I am very curious to dig (or for someone else to dig :stuck_out_tongue: ) and see what happens to R / R packages when yum upgrade happens.

I thought I said this, but maybe I just thought it. The SLA for the Connect application would obviously impact the others - if Connect is down for 30 minutes, then every app is going to be down for 30 minutes. So I could definitely see the Connect SLA taking priority.

On the other hand, I do not imagine that every app would need an SLA. Depending on how Connect is used, publishing is easy, and some documents/applications may be pieces of development or works in progress. I would expect that app-level SLAs would be most fitting for applications that have heavy traffic/usage or are very central to the business (i.e. downtime affects someone more than just a minor inconvenience). You can definitely "over-SLA" and end up as a super-stressed-out firefighter having to track down every tiny little issue that may surface in a Shiny application (which is especially problematic when first learning Shiny / error handling + reactivity, etc.).

Not sure what your business believes about SLAs, but I would generally shy away from promising up-time unless there is business value in that up-time being guaranteed. Again, if an app only gets a few casual/non-critical users per month, how much developer time is it really worth to ensure the app is up and 100% functional for 99% of the month? Seems like a waste IMO :slight_smile: (i.e. nobody would pay for it if they had to)


We finally convinced IT to blacklist the R system dependencies from yum upgrade until we could figure out a risk mitigation strategy. Fingers crossed that Connect doesn't go down!
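
(For the record, the "blacklist" is just yum's exclude directive; the package list below is illustrative:)

# /etc/yum.conf
# note: excluding a library also blocks its security patches
exclude=libxml2* gdal* geos* proj* libcurl*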

I agree with your opinion about SLAs. Most apps do not require 100% uptime and should not need one. But other apps, specifically ones that are critical for job function, likely need an SLA.

I've started logically binning Shiny apps into 4 categories:

  1. Prototype. These are proof-of-concept pieces that likely do not have validated code and are intended to show a potential capability.
  2. Unvalidated App. These are apps that someone built to perform a specific function but that never went through QC/Testing/Documentation. We are trying to eliminate these, but they are likely common in new developer environments.
  3. Validated App (no SLA). These are apps that perform a function and have been validated and tested. It has been consciously determined (and documented) that an SLA is not necessary as downtime is acceptable or it is not critical for a job function.
  4. Validated App (SLA). These are apps that perform a critical function and have been validated and tested. These apps are required for individuals to perform their role so minimizing downtime is critical. Therefore an SLA should be implemented.

Another thing I want to add in case someone finds this later is how we handle app updates. We discussed this extensively and debated between pushing updates with a service account and creating a new app for each version. We settled on a new app for each version, as it allows for simple regression testing, easy validation (as it's already on Prod), and easy implementation (changing the vanity URL).


I like this a lot. I have heard similar proposals in other "Data Governance" documentation. I know some BI environments will use badges to signify a validated app, thereby giving the viewer very clear visual feedback that they are looking at validated content.

I think this sounds like a good call, with one important caveat: it is a good call because you do not have a staging server. I think the ideal architecture (from a validation perspective) would include a staging server, and then the service account approach would be more feasible (esp. once we have made programmatic deployments easier).

One more thought to keep in mind - when you migrate to a new endpoint on Connect (by modifying the vanity URLs), you will need to take into account app configuration (runtime settings, schedules, etc.), ACLs, titles, descriptions, and the like. Today, there is no way to programmatically transfer that information, but the Connect Server API is the place to monitor for those features as we add functionality each release! (I would expect that these features in particular are probably pretty far off, but wanted to share the thought process nonetheless!)


There could be 1000s of packages in use, which might depend on 100s of system libraries; how did you identify each library?
Or did you mean you convinced the IT department to pause yum upgrade on the R server?

Good question. We isolated the required packages by seeing what had been installed through yum. I was able to list them and dump them into a file to review:

$ rpm -qa --last > out.txt

This allowed me to see what package was installed and when. It took a little time to review, but I believe I isolated all the appropriate packages. In all there were probably 20-30 packages to isolate.
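
A couple of commands can speed up that review (the grep pattern is just the usual suspects for an R stack):

$ rpm -qa --last | grep -iE 'libxml2|curl|openssl|gdal|geos|proj'   # narrow the list
$ rpm -qf /usr/lib64/libxml2.so.2    # confirm which RPM owns a given library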

On a side note, this topic came up at R/Pharma this year. I think there was a package or website that links R libraries to Linux dependencies, but I cannot recall where it exists.

You are probably referring to r-hub/sysreqsdb (SystemRequirements mappings for R packages) and r-hub/sysreqs (an R package to install system requirements) on GitHub.


This command does help identify the yum-installed list; I have around 700 packages installed. Is there a way I can associate each of them with the R packages that require them?

I'm not aware of any way to post-hoc match Linux libraries to R packages automatically. Your best bet is to use the links rstub provided (above), look up the relevant R packages, and document the Linux library dependencies. It's not automated, but it should probably only take 30-60 minutes.
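
That said, if you want a rough first pass, you can at least enumerate the shared libraries your compiled R packages actually link against (library path illustrative; this misses build-time-only dependencies like headers):

# List every system library linked by compiled packages in an R library tree
$ for so in /opt/R/3.5.1/lib64/R/library/*/libs/*.so; do ldd "$so"; done | awk '/=>/ {print $1}' | sort -u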