bench::mark() is used to benchmark one or a series of expressions. We feel it has a number of advantages over alternatives:
- Always uses the highest-precision APIs available for each operating system (often nanoseconds).
- Tracks memory allocations for each expression.
- Tracks the number and type of R garbage collections per expression iteration.
- Verifies equality of expression results by default, to avoid accidentally benchmarking inequivalent code.
- Has bench::press(), which allows you to easily perform and combine benchmarks across a large grid of values.
- Uses adaptive stopping by default, running each expression for a set amount of time rather than for a specific number of iterations.
- Runs expressions in batches and calculates summary statistics after filtering out iterations with garbage collections. This allows you to isolate the performance and effects of garbage collection on running time (for more details see Neal 2014).
- Returns times and memory usage as custom objects with human-readable formatting for display (e.g. 104ns) and comparisons (e.g. x$mem_alloc > "10MB").
Because results are checked for equivalence by default, bench::mark() will throw an error if they differ, so you don't accidentally benchmark inequivalent code; this verification can be disabled with check = FALSE.
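As a rough sketch of the workflow described above (the expressions and data sizes here are illustrative, not taken from the original text):

```r
library(bench)

x <- runif(100)

# Benchmark two equivalent expressions; mark() verifies their results match
# and reports timings, memory allocations, and GC counts per expression.
bnch <- bench::mark(
  sqrt(x),
  x ^ 0.5
)
bnch

# The returned columns are human-readable objects that support comparisons.
bnch$mem_alloc > "10MB"

# bench::press() repeats the benchmark across a grid of parameter values.
results <- bench::press(
  n = c(1e3, 1e5),
  {
    dat <- runif(n)
    bench::mark(
      sqrt(dat),
      dat ^ 0.5
    )
  }
)
results
```

sqrt(x) and x ^ 0.5 produce identical results for non-negative input, so the default equality check passes; swapping in genuinely different computations would trigger the error described above.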
I've tried microbenchmark before, but because of the way it works internally it produced out-of-memory errors when benchmarking functions that need a lot of memory.
Maybe I could also use the profmem package, though I would prefer something integrated and simple.
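If profmem turns out to be the right fit, a minimal sketch might look like the following (an assumption-laden illustration, not from the original text; profmem requires an R build with memory profiling enabled, which it checks for at run time):

```r
library(profmem)

# Record every allocation made while evaluating the expression.
p <- profmem({
  x <- raw(1e6)      # allocates roughly 1 MB
  y <- numeric(1e5)  # allocates roughly 800 kB
})

p          # one row per allocation, with size and call stack
total(p)   # total number of bytes allocated inside the expression
```

Unlike bench::mark(), this only measures allocations, not running time, so it would complement rather than replace a benchmarking tool.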