Here's code that benchmarks the time to run an empty loop one million times, and compares that to the time to loop with a single assignment in that million-iteration loop:

```
nums <- rep(1, 1e6)
j <- 1L
microbenchmark::microbenchmark(
  {
    for (i in nums) {
    }
  },
  {
    for (i in nums) {
      j <- 1L
    }
  }
)
)
```

On my laptop, I get the following results:

```
Unit: milliseconds
                            expr      min       lq     mean   median       uq      max neval
         { for (i in nums) { } } 13.52649 14.15539 15.31180 14.65343 16.01338 22.20064   100
 { for (i in nums) { j <- 1L } } 55.51761 58.48899 63.65749 61.59985 67.21565 97.32986   100
```

Looking at the median times, the loop machinery itself (~15 ms) accounts for about a third of the time that the single assignment adds (~47 ms). If you are doing any worthwhile calculation in the loop body, the loop construct itself is not the slow part. That is why people point at alternative methods of code optimization: if you want a "fast for loop", use `base::for`. No package is going to be capable of reducing the loop overhead enough to matter without other optimizations.
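To make that concrete, here is a sketch of where the real speedups usually come from. The task (summing squares) and the function names `loop_sum` and `vec_sum` are illustrative choices, not from the benchmark above; the point is that the vectorized version wins by eliminating per-iteration R-level work, not by using a faster loop construct:

```
nums <- runif(1e6)

# Loop version: the for construct itself only costs on the order of
# 15 ms per million iterations; the per-iteration arithmetic and
# assignment dominate the runtime.
loop_sum <- function(x) {
  total <- 0
  for (v in x) {
    total <- total + v^2
  }
  total
}

# Vectorized version: the squaring and summing happen in compiled
# code, with no per-iteration R-level evaluation at all.
vec_sum <- function(x) sum(x^2)

# Both give the same answer (up to floating-point tolerance).
stopifnot(isTRUE(all.equal(loop_sum(nums), vec_sum(nums))))
```

Profiling either version would show the time going to the body's arithmetic, which is why replacing the body (vectorization, compiled code) helps and replacing the loop construct does not.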