Best practice for .lib when installing R packages

I have a question regarding the best practice when installing packages.

I have had some problems lately with packages that require compilation and throw errors during installation (specifically the packages digest and tmaps). I've done some research and found the following code on stackoverflow.com:

.libPaths(c(.libPaths()[2], .libPaths()[1]))
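For context, a hedged sketch of what that line does (the index positions are an assumption about a typical Windows setup, where the user library comes first and the system library second):

```r
# .libPaths() returns the current library search path, e.g. on Windows:
#   [1] "C:/Users/me/Documents/R/win-library/3.6"   (user library)
#   [2] "C:/Program Files/R/R-3.6.2/library"        (system library)
paths <- .libPaths()

# Swap the first two entries so install.packages() and library()
# look in the system library first (guarded in case only one exists):
if (length(paths) >= 2) {
  .libPaths(c(paths[2], paths[1]))
}

.libPaths()  # inspect the new search order
```

Note this change only lasts for the current session; it has to be rerun (or placed in a startup file) to persist.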

This solved the installation problem, but I then realised that some of my packages live in the "C:/Program Files/R/R-3.6.2/library" folder while others are in the "~/Documents/R/win-library/3.6" folder. So when I run RStudio as administrator I have access to all the packages, but when I run RStudio as a regular user I only have access to the packages installed in the "~/Documents/R/win-library/3.6" folder, with all the resulting errors.

This brings me to my question: what is the best practice for installing R packages and configuring RStudio so that this workaround isn't needed? What is the best practice folder structure for R, R libraries, and RStudio on Windows?

Any comments and guidance will be appreciated.

I don't think there is a single best practice; it depends on your use case. For example, for personal use I prefer to edit the lib path in my Rprofile.site file so that all packages are always installed in the system-level library, but this wouldn't be possible in a work environment where you don't have administrative privileges.
So I think you would get a better answer if you describe your constraints.
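For illustration, a minimal sketch of the Rprofile.site approach mentioned above (the path is an assumption for a default Windows install; writing to the system library requires administrator rights):

```r
# In R_HOME/etc/Rprofile.site
# (e.g. C:/Program Files/R/R-3.6.2/etc/Rprofile.site),
# restrict the library search path to R's system library (.Library),
# so every install.packages() call targets it:
.libPaths(.Library)
```

Since Rprofile.site is sourced at startup for every user, this keeps all packages in one place at the cost of needing elevated privileges to install.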

Thank you @andresrcs for the response.

To expand on the problem that I'm experiencing.
When I open up RStudio and run the command library(sparklyr) I get the following error:

Error: package or namespace load failed for ‘sparklyr’:
package ‘digest’ does not have a namespace

If I then run .libPaths(c(.libPaths()[2], .libPaths()[1])), which swaps the order of the library paths, I can run library(sparklyr) without error.

I have since reinstalled the digest package in the user directory (which previously was not possible) and now everything works perfectly. I can't explain this, but the problem is solved. I still think there should be some form of "best practice" guidance on system setup for data scientists, explaining the pros and cons of the possible locations for R and R packages.
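For anyone hitting the same issue, a hedged sketch of installing into an explicit library so there is no ambiguity about where the package lands (the environment variable resolves to the user library, e.g. ~/Documents/R/win-library/3.6 on Windows):

```r
# Target the user library explicitly rather than relying on the
# current .libPaths() ordering:
user_lib <- Sys.getenv("R_LIBS_USER")
dir.create(user_lib, recursive = TRUE, showWarnings = FALSE)
install.packages("digest", lib = user_lib)

# Confirm which copy of the package will actually be loaded:
find.package("digest")
```

Being explicit with the lib argument avoids the situation where an admin session and a user session silently install to different folders.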

I think renv might evolve into a best practice.
https://rstudio.github.io/renv/articles/renv.html

In my opinion renv is already good practice for serious work, but having multiple per-project package libraries could be an issue depending on your setup, because of the storage space you would be allocating if you used it for everything. So I think it is better to use it for serious work and not for exploring or messing around.
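For reference, the basic per-project workflow looks roughly like this (a sketch based on the renv documentation linked above):

```r
# install.packages("renv")   # once, from CRAN

renv::init()      # create a private, per-project library and lockfile
# ...install or update packages as usual with install.packages()...
renv::snapshot()  # record the exact package versions in renv.lock
renv::restore()   # later, or on another machine: reinstall those versions
```

The lockfile (renv.lock) is what makes the project reproducible; the per-project library is the part that costs disk space.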

It has a global cache option, which I think can help with the storage space issue:
https://rstudio.github.io/renv/articles/renv.html#cache

Though I have to admit, I haven't started using renv myself... I'm waiting for an opening in my project schedule to rejig my workflow.

Yes, it would help indeed, but as I understand the way it works, wouldn't you have to always be inside a project to take advantage of the cache? It still seems to me like an added layer of complexity that wouldn't always be needed.

This topic was automatically closed 21 days after the last reply. New replies are no longer allowed.