Hello, everybody,
I want to know whether it is possible to use the GPU to store R objects that exceed the available RAM. For example, I am currently trying to read 4000 images with the EBImage package, and I get an error saying a 20.8 Mb vector cannot be allocated:
library(keras)
library(EBImage)
library(tensorflow)
# pics is a character vector with the paths of the ~4000 image files
mypic <- lapply(pics, readImage)
Error: cannot allocate vector of size 20.8 Mb
I have Windows 10, 16 GB of RAM, an Intel Core i7-8750H CPU @ 2.20 GHz, a 64-bit operating system, and an NVIDIA GeForce GTX 1060 GPU, with CUDA, Keras and TensorFlow installed, and I was wondering if I could use that extra capacity.
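In case it is relevant, one thing I was considering as a workaround is downscaling each image as it is read, so the full-resolution copies are never all in memory at once. This is only a sketch: `pics` is assumed to be my character vector of file paths, and the 128x128 target size is an arbitrary choice on my part.

```r
library(EBImage)

# Sketch: read and immediately downscale each image, so only one
# full-resolution image is in memory at a time.
# Assumptions: `pics` is a character vector of image file paths,
# and 128x128 is an acceptable working resolution.
mypic <- lapply(pics, function(p) {
  img <- readImage(p)          # load one image
  resize(img, w = 128, h = 128) # keep only the downscaled copy
})
```

But I would still prefer to know whether the GPU memory can be used directly for this.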
memory.size()
[1] 247.87
memory.limit()
[1] 16234
Session info:
sessionInfo()
R version 3.6.3 (2020-02-29)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 18363)
Matrix products: default
locale:
[1] LC_COLLATE=Spanish_Chile.1252 LC_CTYPE=Spanish_Chile.1252 LC_MONETARY=Spanish_Chile.1252 LC_NUMERIC=C
[5] LC_TIME=Spanish_Chile.1252
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] EBImage_4.28.1 keras_2.3.0.0 tensorflow_2.2.0
loaded via a namespace (and not attached):
[1] Rcpp_1.0.5 knitr_1.29 whisker_0.4 magrittr_1.5 BiocGenerics_0.32.0 rappdirs_0.3.1 lattice_0.20-41
[8] R6_2.5.0 jpeg_0.1-8.1 rlang_0.4.8 tools_3.6.3 parallel_3.6.3 grid_3.6.3 packrat_0.5.0
[15] xfun_0.16 png_0.1-7 htmltools_0.5.0 tfruns_1.4 yaml_2.2.1 abind_1.4-5 digest_0.6.27
[22] Matrix_1.2-18 htmlwidgets_1.5.1 bitops_1.0-6 fftwtools_0.9-9 base64enc_0.1-3 RCurl_1.98-1.2 zeallot_0.1.0
[29] evaluate_0.14 tiff_0.1-5 rmarkdown_2.3 compiler_3.6.3 generics_0.0.2 locfit_1.5-9.4 reticulate_1.18
[36] jsonlite_1.7.1
Thanks