In situations like this, where you want to apply the same task to many files, I find it is usually best to work out how you would do it for one item, wrap that in a function, and then it becomes much simpler to apply that function iteratively.
For example, we can create a function engine() that does everything you want for a single file: it loads the CSV, keeps only the distinct observations, and saves out the result. The function takes a single pair of arguments, the input CSV path and the output CSV path. walk2() then applies it to each pair of paths in turn.
library(dplyr)
library(purrr)
library(fs)
inputs <- dir_ls("/Users/my_name/my_folder", glob = "*.csv")
outputs <- path_file(inputs)  # bare filenames, so outputs land in the working directory
engine <- function(input, output) {
  # Read a semicolon-separated CSV, keep one row per POS, write it back out
  data <- read.csv2(input)
  data <- distinct(data, POS, .keep_all = TRUE)
  write.csv2(data, output, row.names = FALSE)  # write.csv2 matches read.csv2's format
  invisible(data)
}
walk2(inputs, outputs, engine)
Just be careful that you do not overwrite the original files, unless that is your intent: because the outputs above are bare filenames, they are written relative to the working directory. I generally prefer to use full paths for both input and output, to be safe.
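To make that concrete, here is a self-contained sketch of the same pattern that builds full output paths into a separate folder, so the originals can never be clobbered. The folder names (raw, deduped) and the sample data are hypothetical, purely for illustration; substitute your own paths.

```r
library(dplyr)
library(purrr)
library(fs)

# Hypothetical input/output folders for illustration
in_dir  <- path(tempdir(), "raw")
out_dir <- path(tempdir(), "deduped")
dir_create(c(in_dir, out_dir))

# Two small example CSVs (semicolon-separated, as read.csv2 expects)
write.csv2(data.frame(POS = c(1, 1, 2), x = c("a", "b", "c")),
           path(in_dir, "one.csv"), row.names = FALSE)
write.csv2(data.frame(POS = c(3, 3), x = c("d", "e")),
           path(in_dir, "two.csv"), row.names = FALSE)

engine <- function(input, output) {
  data <- read.csv2(input)
  data <- distinct(data, POS, .keep_all = TRUE)
  write.csv2(data, output, row.names = FALSE)
  invisible(data)
}

inputs  <- dir_ls(in_dir, glob = "*.csv")
outputs <- path(out_dir, path_file(inputs))  # full paths into out_dir
walk2(inputs, outputs, engine)
```

Because outputs points at a different directory, rerunning the script is idempotent and the raw files stay untouched.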