My GPU is not being used with Keras and TensorFlow

Dear all,

This is my first post here!
I am close to total despair about keras and tensorflow-gpu: my aim was to use the GPU instead of the CPU to run my simulations, because I read it should be faster.
I spent two days figuring out how to set up all the packages properly using Anaconda (I am a neophyte with it). After following every available post, I finally got something that looks like it is working:

py_module_available("tensorflow")
[1] TRUE
py_module_available("keras")
[1] TRUE
k = backend()
sess = k$get_session()
2019-09-27 18:52:26.405087: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: Quadro P3000 major: 6 minor: 1 memoryClockRate(GHz): 1.215
pciBusID: 0000:01:00.0
2019-09-27 18:52:26.405457: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2019-09-27 18:52:26.406082: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2019-09-27 18:52:26.406333: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-09-27 18:52:26.406616: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0
2019-09-27 18:52:26.406924: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N
2019-09-27 18:52:26.407561: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4712 MB memory) -> physical GPU (device: 0, name: Quadro P3000, pci bus id: 0000:01:00.0, compute capability: 6.1)
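
For reference, another quick check that TensorFlow can actually see the GPU from R (a minimal sketch, assuming the tensorflow R package and the TF 1.x test helpers):

library(tensorflow)
# TF 1.x helpers: TRUE if TensorFlow can access a GPU, plus the device name
tf$test$is_gpu_available()
tf$test$gpu_device_name()   # e.g. "/device:GPU:0", empty string if none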

But after trying to run my simulations, I realized that the GPU was not being used:
Screenshot here: https://image.noelshack.com/fichiers/2019/39/5/1569582491-gpu.jpg

So I ran the nvidia-smi command: the rsession.exe process shows up on the GPU, but the GPU memory usage is reported as N/A...
Screenshot here: https://image.noelshack.com/fichiers/2019/39/5/1569582492-nvidia-smi.jpg

Can somebody help me? :sob:

Can you say something about the types of models you are building with keras or tensorflow? Not all operations make equally intensive use of the GPU.

I have found that training an image recognition task with many convolutional layers consumes a substantial amount of GPU, but other types of models don't necessarily do so.

I suggest you train a standard convolutional task, e.g. the classic MNIST example, and observe the GPU in action while the model is training.

From your snippets above, it certainly seems like your GPU is configured correctly, but if you try the MNIST example and report back on what you find, it may help to confirm one way or the other.
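
If it helps, here is roughly the toy run I have in mind. This is only a minimal sketch using the keras R package and its built-in dataset_mnist(); the layer sizes are arbitrary, the point is simply to keep the GPU busy with convolutions while you watch nvidia-smi or the Task Manager:

library(keras)

# Load MNIST, reshape to (samples, 28, 28, 1) and scale to [0, 1]
mnist <- dataset_mnist()
x_train <- array_reshape(mnist$train$x, c(nrow(mnist$train$x), 28, 28, 1)) / 255
y_train <- to_categorical(mnist$train$y, 10)

# A small convnet; convolutions are the kind of operation that keeps a GPU busy
model <- keras_model_sequential() %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = "relu",
                input_shape = c(28, 28, 1)) %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_conv_2d(filters = 64, kernel_size = c(3, 3), activation = "relu") %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_flatten() %>%
  layer_dense(units = 128, activation = "relu") %>%
  layer_dense(units = 10, activation = "softmax")

model %>% compile(
  loss = "categorical_crossentropy",
  optimizer = "adam",
  metrics = "accuracy"
)

# Watch GPU utilization while this runs
model %>% fit(x_train, y_train, epochs = 2, batch_size = 128)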

Also, I don't think you need Anaconda to do this, and I suggest you follow the instructions at https://tensorflow.rstudio.com/tools/local_gpu.html.

Thank you very much for answering so quickly.
The models I am building are about phylogenetic reconstructions and phylogenetic signal estimations.
You were right: my GPU is working when I run the classic MNIST tasks.
Does that mean there is no way to use the GPU cores for reconstruction tasks (or any other kind of simulation)?

Thank you again for your answer

I'm glad you confirmed that your GPU is in fact working correctly.

It's not a trivial task to identify why your code is running slowly / taking a long time to compute. Without inspecting your code, it's hard to comment. Passing the workload to the GPU is only one strategy, and I recommend the book "Efficient R Programming" by Colin Gillespie and Robin Lovelace.

In addition, I recommend you learn how to use the profiler from within RStudio, e.g. by reading Profiling with RStudio. Code profiling is a useful way to determine where your bottlenecks are.
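
As a minimal illustration (assuming the profvis package, which is what the RStudio profiler uses; the body here is just placeholder computation, not your actual simulation code):

library(profvis)

# Wrap the code you want to profile; RStudio opens an interactive flame graph
profvis({
  x <- matrix(rnorm(2e5), ncol = 100)
  d <- dist(x)    # candidate bottleneck: pairwise distances
  h <- hclust(d)  # candidate bottleneck: clustering
})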


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.