Installing keras, TF, and python dependencies in base project for class' workspace in Posit Cloud

Hi,

I will be teaching an ML class with ~30 students at the undergraduate level this winter. I am currently trying to set up the base project for my class's workspace in Posit Cloud on an organization account so that all students have access to the required packages. I am running into issues while setting up the Python-related environment/packages. I am planning to cover basic DNNs with keras, tensorflow, etc.

I have followed the instructions at https://tensorflow.rstudio.com/install/#alternate-versions, but haven't been successful.

I have also tried the snippet of code at https://community.rstudio.com/t/workspace-base-project-and-reticulate-install-for-all-members/128761/5 (a similar issue to the one I am having), but keep getting the error messages below.

Continuing with the previous issue:

Another error message looks like this.

I thought that it could be a space issue (I believe the project has about 1 GB), so I tried adding pip_options = "--no-cache-dir" to the install_keras() call from https://community.rstudio.com/t/tensorflow-installation-in-rstudio-cloud-does-not-work/111847 (I am also getting the same "Killed" and "Error installing package(s)" error messages), but the issue wasn't resolved.
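For what it's worth, a "Killed" message during a pip install usually points at RAM rather than disk space: the kernel's out-of-memory killer terminates the process when the container's memory limit is exceeded. A quick diagnostic sketch (assuming a Linux session like Posit Cloud's, run from the Terminal tab):

```shell
# "Killed" during pip installs usually means the kernel OOM killer
# stopped the process: RAM, not disk, ran out. Check both limits:
grep MemTotal /proc/meminfo   # RAM visible to the container
df -h .                       # disk space in the working directory
```

If MemTotal is around 1 GB, that is consistent with the installs dying partway through; --no-cache-dir saves disk space but does not reduce peak RAM use.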

I am having a hard time figuring this issue out. Any suggestions/references/fixes would be greatly appreciated.

Cheers.


There’s a report of a successful install from a couple of years ago here. Getting that version of TensorFlow required bypassing the keras install routine, which is said to install the current version.

An alternative is to host an RStudio Server instance on one of your institution’s servers, which has the added advantage of using existing authentication. It provides a near-identical experience to Desktop, and initial setup is simple.

Thanks for the reference, @technocrat . I tried the code you mentioned. A couple of issues:

First, after executing the reticulate::install_miniconda("miniconda") line, I got the error

"Killed ... Error creating conda environment 'r-reticulate' [exit code 137]"

I checked and it is a memory issue. Anyway, I continued, and the keras::install_keras ... chunk of code gave me the following error.
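A side note on why exit code 137 means "memory issue": POSIX shells report 128 + the signal number when a process is killed by a signal, so 137 decodes to signal 9 (SIGKILL), which is what the kernel's out-of-memory killer sends. A minimal sketch of the arithmetic:

```shell
# Decode an exit status above 128: it is 128 + the signal number.
# 137 - 128 = 9, i.e. SIGKILL -- the signal the OOM killer sends.
code=137
sig=$((code - 128))
echo "killed by signal $sig"
kill -l "$sig"   # prints the signal name: KILL
```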

Is it because miniconda wasn't installed properly, or is it a separate issue related to the version of tensorflow? What am I missing? Any help would be much appreciated.

Thanks.

I flailed around a bit and think it can be done. I created a project in an empty account and ran the following in R:

install.packages("reticulate")
install.packages("keras")
install.packages("tensorflow") # not sure this is necessary
reticulate::install_miniconda("miniconda")
Sys.setenv(WORKON_HOME = "virtualenvs")
keras::install_keras(
  method = "virtualenv",
  conda = "miniconda/bin/conda",
  envname = "r-reticulate",
  restart_session = FALSE
)

I got a bunch of errors:

Killed
Error: Error installing package(s): "'tensorflow==2.9.*'", "'tensorflow-hub'", "'scipy'", "'requests'", "'Pillow'", "'h5py'", "'pandas'", "'pydot'"

I did get a miniconda directory in the project, however, so I navigated to bin there and ran

./conda install tensorflow-hub
./conda install scipy

and these seem to work. I assume the other missing Python packages, such as pandas, can be installed similarly.

What I haven't tried, and what may be challenging, is creating the right virtual environment and pointing R console sessions at it, but I hope this gets you closer.

Hi @technocrat,

I tried running your code. The reticulate::install_miniconda("miniconda") step was working fine, but at the end it got 'Killed' with exit code 137. I was able to install all the packages using py_install, except "tensorflow==2.9.3".

When I tried to run a neural network with the above installation, I received the following error. I could not solve it using the install_tensorflow() function.

I also noticed that the miniconda folder had python3.8, whereas the virtualenvs folder had python3.9. I am wondering if the discrepancy creates an issue, but I am not sure how to specify the version of python.

What seems to be working fine for me is the following from this post.

install.packages('reticulate')
install.packages('keras')
library(reticulate)
library(keras)
virtualenv_create("myenv")
use_virtualenv("myenv", required = TRUE)
install_keras(method="virtualenv", envname="myenv", pip_options = "--no-cache-dir")

I was able to run a basic neural network with the above installation procedure.

Thanks a lot for your time and for looking into this.


I got errors when attempting to run with this setup:

2022-12-22 14:55:29.143731: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-12-22 14:55:36.010353: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/R/4.2.2/lib/R/lib:/usr/local/lib:/usr/lib/x86_64-linux-gnu:/usr/lib/jvm/java-11-openjdk-amd64/lib/server:/opt/R/4.2.2/lib/R/lib:/lib:/usr/local/lib:/usr/lib/x86_64-linux-gnu:/usr/lib/jvm/java-11-openjdk-amd64/lib/server
2022-12-22 14:55:36.010371: I tensorflow/compiler/xla/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2022-12-22 14:56:00.187446: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/R/4.2.2/lib/R/lib:/usr/local/lib:/usr/lib/x86_64-linux-gnu:/usr/lib/jvm/java-11-openjdk-amd64/lib/server:/opt/R/4.2.2/lib/R/lib:/lib:/usr/local/lib:/usr/lib/x86_64-linux-gnu:/usr/lib/jvm/java-11-openjdk-amd64/lib/server
2022-12-22 14:56:00.187760: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/R/4.2.2/lib/R/lib:/usr/local/lib:/usr/lib/x86_64-linux-gnu:/usr/lib/jvm/java-11-openjdk-amd64/lib/server:/opt/R/4.2.2/lib/R/lib:/lib:/usr/local/lib:/usr/lib/x86_64-linux-gnu:/usr/lib/jvm/java-11-openjdk-amd64/lib/server
2022-12-22 14:56:00.187773: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.

Thanks for pointing this out, @jkylearmstrong . I don't think that these are errors; they are probably warning (W) and informational (I) messages. They are related to GPU installations, which I wasn't concerned with.

My neuralnet code worked despite these messages. Also, as per the instructions here, I checked whether the installation succeeded with the following chunk of code.

library(tensorflow)
tf$constant("Hello Tensorflow!")

I got the desired message. The link above states

This will provide you with a default installation of TensorFlow suitable for use with the tensorflow R package. Read on if you want to learn about additional installation options, including installing a version of TensorFlow that takes advantage of Nvidia GPUs if you have the correct CUDA libraries installed.

The messages you mentioned are probably related to this. Hope this helps.


Thanks but unfortunately I am still unable to run a sample notebook from the Deep Learning with R repo.

install.packages('reticulate')
install.packages('keras')
install.packages("tensorflow")

library(reticulate)
library(keras)
library(tensorflow)

virtualenv_create("myenv")
use_virtualenv("myenv", required = TRUE)
install_tensorflow(version = "cpu")
install_keras(method="virtualenv", envname="myenv", pip_options = "--no-cache-dir")

Now the error I'm getting is as follows:

Using virtual environment '/cloud/project/myenv' ...
+ '/cloud/project/myenv/bin/python' -m pip install --upgrade --no-user --ignore-installed 'tensorflow-cpu==2.11.*'
Collecting tensorflow-cpu==2.11.*
Killed
Error: Error installing package(s): "'tensorflow-cpu==2.11.*'"

This install procedure seems to have worked:

install.packages('reticulate')
install.packages('keras')
install.packages("tensorflow")

library(reticulate)
library(keras)
library(tensorflow)

virtualenv_create("myenv")
use_virtualenv("myenv", required = TRUE)
install_tensorflow(method="virtualenv", 
                   envname="myenv", 
                   pip_options = "--no-cache-dir", 
                   version = "cpu")

install_keras(method="virtualenv", 
              envname="myenv", 
              pip_options = "--no-cache-dir",
              version = "cpu")

but when attempting to run this Deep Learning with R notebook 2.1

I am getting

2022-12-22 22:14:46.706060: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.

Looks like informational only.

If it were able to run, that would be fine, but unfortunately it's not. The code (without the Rmd) is here:

library(keras)

mnist <- dataset_mnist()
train_images <- mnist$train$x
train_labels <- mnist$train$y
test_images <- mnist$test$x
test_labels <- mnist$test$y

network <- keras_model_sequential() %>% 
  layer_dense(units = 512, activation = "relu", input_shape = c(28 * 28)) %>% 
  layer_dense(units = 10, activation = "softmax")

network %>% compile(
  optimizer = "rmsprop",
  loss = "categorical_crossentropy",
  metrics = c("accuracy")
)

train_images <- array_reshape(train_images, c(60000, 28 * 28))
train_images <- train_images / 255

test_images <- array_reshape(test_images, c(10000, 28 * 28))
test_images <- test_images / 255

train_labels <- to_categorical(train_labels)
test_labels <- to_categorical(test_labels)

network %>% fit(train_images, train_labels, epochs = 5, batch_size = 128)

metrics <- network %>% evaluate(test_images, test_labels, verbose = 0)
metrics

when I run

mnist <- dataset_mnist()

that is when the error

2022-12-22 23:35:37.758246: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.

comes up and Posit Cloud dies

Ah. That’s a crash, not an error. I’ll flag this for the cloud guys.


Thank you! I appreciate it!

It seems like an informational message rather than an error. And Posit Cloud crashing seems to be an issue with the cloud system/environment rather than your installation or code. Are you working on a free account?

I am using an institutional account. Even that doesn't let me work with a dataset as large, or a network as big, as yours. I have also had this issue with the Cloud crashing and quitting.

I am planning to use only the first 100 or 1000 observations of this dataset, with a small-ish network, to get the idea across to my students. I have also seen some texts recommend against training neural nets on cloud systems like this.

If it helps, you can try accessing the dataset from the dslabs package instead. I am at least able to load the data with the following.

install.packages("dslabs")
mnist <- dslabs::read_mnist()

All the best.


This topic was automatically closed 21 days after the last reply. New replies are no longer allowed.
