Importing and transforming many CSV files

I am importing and transforming many files in an inefficient way, and I was wondering if someone could show me a faster way using purrr. The following code is what I tried, and it should give you the general pattern:

data_2013 <- read_csv("data_2013") %>%
  clean_names() %>%
  mutate(year = "2013")

data_2014 <- read_csv("data_2014") %>%
  clean_names() %>%
  mutate(year = "2014")

The pattern continues, with the year increasing by one all the way up to 2019. I eventually join all these data frames, since the key variables are the same, but the repetitive importing is clearly inefficient.

Any help would be greatly appreciated.

You can try map_dfr() from the purrr package. It maps a function over a vector and row-binds the results into a single data frame, so the import, clean, and mutate steps only need to be written once.
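A minimal sketch, assuming the files are named "data_2013" through "data_2019" exactly as in your code (adjust the paths if your real files have a .csv extension or live in a subdirectory):

```r
library(tidyverse)  # provides purrr::map_dfr() and readr::read_csv()
library(janitor)    # provides clean_names()

years <- 2013:2019

# For each year, read the file, clean the column names, and tag the rows
# with the year; map_dfr() then binds everything into one data frame.
all_data <- map_dfr(years, function(y) {
  read_csv(paste0("data_", y)) %>%
    clean_names() %>%
    mutate(year = as.character(y))
})
```

Note that year is created with as.character() to match your original code, where it was a string; drop the conversion if you would rather keep it numeric. Since all the files end up stacked in one data frame with a year column, you may no longer need the separate join step at all.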

See article:
