I am looking to "batch" download files from a particular Google Drive sub-directory. Jenny Bryan has a great reprex and explanation. Her approach downloads all files to the current working directory; I want to sequester the downloads in a temporary directory that is recycled at the session's end.
```r
library(googledrive)
library(purrr)

## store the URL you have
folder_url <- "https://drive.google.com/drive/folders/0B7tJg2i5HAo2c0VzVFVhLUdQcnM"

## identify this folder on Drive
## let googledrive know this is a file ID or URL, as opposed to file name
folder <- drive_get(as_id(folder_url))

## identify the csv files in that folder
csv_files <- drive_ls(folder, type = "csv")

## download them
walk(csv_files$id, ~ drive_download(as_id(.x)))
```
When I try to change the target directory, however, I get an error:
```r
## create the temporary directory
csv_dir <- tempdir()

## download the files
walk(csv_files$id, ~ drive_download(as_id(.x), path = csv_dir, overwrite = TRUE))
```

This produces the error message:

```
Error in curl::curl_fetch_disk(url, x$path, handle = handle) :
  Failed to open file
```
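My guess (an assumption on my part, not confirmed anywhere above) is that `drive_download()`'s `path` argument names the target *file*, not a directory, so curl cannot open the directory path `csv_dir` for writing. If that is right, building a full per-file path with `file.path()` and iterating over ids and names together with `purrr::walk2()` should avoid the error without changing the working directory:

```r
library(googledrive)
library(purrr)

## csv_dir and csv_files are assumed to exist as defined earlier
csv_dir <- tempdir()

## pair each file id with its name and build an explicit destination path,
## so `path` always points at a file inside csv_dir rather than at csv_dir itself
walk2(
  csv_files$id, csv_files$name,
  ~ drive_download(as_id(.x), path = file.path(csv_dir, .y), overwrite = TRUE)
)
```

This sketch assumes `csv_files` has `id` and `name` columns, as returned by `drive_ls()`; it is untested here since it requires Drive authentication.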
I have devised a work-around: `setwd(csv_dir)`, run the `walk()` call, then `setwd(old)`. I am curious why I fail when specifying the target directory in the call to `walk()`, given that I can specify the target directory when calling `drive_download()` on a single file and it executes successfully.
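For concreteness, the work-around I am using looks like this (a sketch assuming `csv_files` and `csv_dir` from above; `old` is my name for the saved working directory):

```r
library(googledrive)
library(purrr)

## save the current working directory so it can be restored afterwards
old <- getwd()

## switch into the temporary directory so downloads land there by default
setwd(csv_dir)
walk(csv_files$id, ~ drive_download(as_id(.x), overwrite = TRUE))

## restore the original working directory
setwd(old)
```

This works, but juggling `setwd()` feels fragile (e.g. if `walk()` errors part-way, the working directory is left changed), which is why I would prefer to pass the target directory directly.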