R, GPU, and AWS

I have a vector of about 500 elements. I have a function that loops through each element and applies an algorithm. The function outputs a data frame with one row per element of the input vector, so roughly 500 rows. Each row of the output data frame can be computed independently of the other elements in the vector.
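As a minimal sketch of the structure described above — `my_algorithm` and the input vector `x` are hypothetical stand-ins for the actual function and data:

```r
# Hypothetical stand-in for the real per-element algorithm:
# takes one element, returns a one-row data frame.
my_algorithm <- function(xi) {
  data.frame(input = xi, result = xi^2)
}

x <- seq_len(500)  # the input vector of ~500 elements

# Serial version: apply the algorithm to each element independently,
# then bind the one-row results into a single data frame.
out <- do.call(rbind, lapply(x, my_algorithm))
nrow(out)  # one row per input element
```

Because each call to `my_algorithm` depends only on its own element, the `lapply` step is what could be parallelized.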

This looks like a classic problem for a GPU, or for multiple CPU cores.

How do I get started with GPUs and AWS as an R programmer?

  1. Would it be accurate to say you have a function that takes 1 input parameter?
  2. You want to call that function 500 times with 500 different parameters.
  3. You want to do that with more than one function evaluation executing in parallel.

How you do that depends on what the function is. Does it use an algorithm that can take advantage of a GPU? Someone has to write code to make an algorithm use multiple CPU cores, or the cores and memory on a GPU. Sorry for all the questions, but these are important details to understand before proposing something that might or might not work.
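As a sketch of the multi-core route — assuming the function is plain R code with no GPU-specific implementation — the `parallel` package that ships with base R can spread the 500 calls across cores. `my_algorithm` is again a hypothetical placeholder for the real function:

```r
library(parallel)

# Hypothetical per-element function returning a one-row data frame.
my_algorithm <- function(xi) data.frame(input = xi, result = xi^2)
x <- seq_len(500)

# A socket cluster of worker processes (works on Windows, macOS, Linux).
cl <- makeCluster(max(1L, detectCores() - 1L))
rows <- parLapply(cl, x, my_algorithm)
stopCluster(cl)

# Bind the one-row results into the final data frame.
out <- do.call(rbind, rows)

# On Unix-alikes, fork-based mclapply() is a simpler alternative:
# rows <- mclapply(x, my_algorithm, mc.cores = max(1L, detectCores() - 1L))
```

The same pattern scales to a large AWS EC2 instance simply by running it on a machine with more cores; using a GPU, by contrast, would require the algorithm itself to be rewritten against a GPU-capable library.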

Yes, those three assumptions summarize my problem.