R is a single-threaded language, so by default an R process uses only one core. The only benefit of using 1 out of 8 cores on a server is the increased clock speed of your CPU due to Turbo Boost, which makes the code run somewhat faster, but very likely not as fast as if you were to use all 8 cores at once.
So running 8 R processes on an 8-core machine, each in a different source window, would make use of all 8 cores of the iMac, but is probably not the most efficient approach from a user perspective.
Using more than one core in R can be achieved programmatically in multiple ways. Some R packages provide support for parallel compute backends. If your code spends most of its time in function calls to such packages, you can possibly speed up your code significantly with little or no change to the code itself.
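As one illustration, the `boot` package (shipped with R as a recommended package) accepts a parallel backend directly through its `parallel` and `ncpus` arguments; the statistic function below is a toy example of my own, not from the original question:

```r
library(boot)

# Toy bootstrap statistic: the mean of the resampled data
stat_mean <- function(d, idx) mean(d[idx])
x <- rnorm(100)

# Serial and parallel calls differ only in two extra arguments;
# the statistic itself is unchanged ("multicore" forks on macOS/Linux).
b_serial   <- boot(x, stat_mean, R = 200)
b_parallel <- boot(x, stat_mean, R = 200, parallel = "multicore", ncpus = 2)

nrow(b_parallel$t)  # 200 bootstrap replicates either way
```

This is the "no code change" route: the package distributes the work, and the user only declares how many cores to use.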
Regarding the code of your faculty member: you mention "multiple sections of code that is the same" and an "increasing number of simulations". Simulations are typically run many times with no interdependencies, and hence parallelisation is trivial. If the code already uses

- `for` loops to iterate over the simulations, those can be converted into a `foreach` loop registered against a `doMC` parallel backend.
- `*apply` functions, those can be converted into `par*Apply` functions registered against a parallel backend.
- the futureverse, it is fairly straightforward to switch to parallel computing.
- functional programming, you can parallelise the code by using the `furrr` equivalent of the `purrr` functions.
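The `*apply` route from the list above can be sketched with only base R's `parallel` package (a PSOCK cluster works on all platforms, including Windows); `run_sim` is a stand-in for a real simulation function:

```r
library(parallel)

# Serial version: one "simulation" per input via sapply()
run_sim <- function(n) sum(seq_len(n))   # toy stand-in for a real simulation
serial <- sapply(1:10, run_sim)

# Parallel version: parSapply() against a 2-worker PSOCK cluster backend.
# The function is shipped to the workers automatically.
cl <- makeCluster(2)
parallel_res <- parSapply(cl, 1:10, run_sim)
stopCluster(cl)

identical(serial, parallel_res)  # TRUE: same results, computed on 2 cores
```

The conversion is mechanical: `sapply(X, f)` becomes `parSapply(cl, X, f)`, which is exactly why embarrassingly parallel simulations are such easy targets.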
There are also other bespoke tools/R packages for parallelisation beyond a single server, such as
clustermq. These tools are especially useful if the code eventually also needs to run on larger compute infrastructure such as HPC clusters.
With the 8-core iMac, the faculty member should expect a speed-up from parallelisation of up to a factor of 8, but they should also be aware of Amdahl's law, which says that the maximum speed-up of parallelised code is limited by the fraction of the code that cannot be parallelised.
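Amdahl's law can be written as S(n) = 1 / ((1 − p) + p/n), where p is the parallelisable fraction of the runtime and n the number of cores. A quick computation shows how strongly the serial fraction dominates:

```r
# Amdahl's law: speed-up on n cores when a fraction p of the runtime
# is parallelisable (and the remaining 1 - p must stay serial)
amdahl <- function(p, n) 1 / ((1 - p) + p / n)

amdahl(1.0, 8)    # 8     : perfectly parallel code scales with the core count
amdahl(0.9, 8)    # ~4.71 : a 10% serial fraction already costs almost half
amdahl(0.9, Inf)  # 10    : even infinitely many cores cannot beat 1 / (1 - p)
```

So on the 8-core iMac, a code that is 90% parallelisable tops out at roughly a 4.7x speed-up, not 8x.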
There are many more options that could be considered, but: before any parallelisation effort, the most important thing is to optimise the code for single-threaded execution first. Increasing the vectorisation of the code (replacing simple for loops, ...) can speed up the code much more than any parallelisation method mentioned above. After all, R is an interpreted language, and the less we rely on the interpreter and the closer we push the work to compiled code, the better.
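A minimal illustration of that point, using a toy sum-of-squares task: the vectorised form delegates the whole loop to compiled C code instead of executing one interpreter iteration per element.

```r
x <- runif(1e6)

# Explicit for loop: every iteration goes through the R interpreter
sum_loop <- function(v) {
  total <- 0
  for (xi in v) total <- total + xi^2
  total
}

# Vectorised equivalent: the loop runs in compiled code
sum_vec <- function(v) sum(v^2)

all.equal(sum_loop(x), sum_vec(x))  # TRUE, and sum_vec is typically far faster
```

Benchmarking with `system.time()` or the `bench` package will show the difference on any machine, with no extra cores involved.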
General guidance on performance optimisation in R can be found at 24 Improving performance | Advanced R.
Both performance optimisation and parallelisation of code can be time-consuming, but especially if the code is going to be reused (i.e. run more than once), they are very worthwhile to pursue.