How to get memory information when benchmarking?

Hello.

I'd like to time how long several regression models take to fit using different libraries.
Until now I have been using something like this:

benchmark(
  "mod1" = {mod1 <- glm(varOUT ~ var1 + var2 + var3 + varfact + City, data = myDF, family = "binomial")},
  "mod2" = {mod2 <- glmer(varOUT ~ var1 + var2 + var3 + varfact + (1|City/ID), data = myDF, family = "binomial")},
  "mod3" = {mod3 <- glmmTMB(varOUT ~ var1 + var2 + var3 + varfact + (1|ID), data = myDF, family = "binomial")},
  replications = 1)

But now I would also like to know how much memory (at maximum) each of these regressions uses. How can I do that?

Regards.

This is basically benchmarking memory allocation; take a look at this SO thread.
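One low-tech option from base R, without extra packages, is the peak-memory counter that `gc()` maintains: reset it before the model fit, then read the "max used" column afterwards. A rough sketch (using a built-in dataset and a simple `lm()` stand-in for the real models):

```r
# Reset R's peak-memory statistics, run the expression, then read the peak.
# The "(Mb)" columns of gc() report memory in megabytes.
gc(reset = TRUE)                       # reset the "max used" counters
fit <- lm(mpg ~ wt + hp, data = mtcars)
usage <- gc()                          # "max used" now reflects the peak since the reset
usage[, "max used"]                    # peak Ncells / Vcells since gc(reset = TRUE)
```

This only measures one expression at a time, so you would wrap each model fit separately, but it needs nothing beyond base R.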

Have you tried the bench package?

Here's from the GitHub page:

Features

bench::mark() is used to benchmark one or a series of expressions, we feel it has a number of advantages over alternatives.

  • Always uses the highest precision APIs available for each operating system (often nanoseconds).
  • Tracks memory allocations for each expression.
  • Tracks the number and type of R garbage collections per expression iteration.
  • Verifies equality of expression results by default, to avoid accidentally benchmarking inequivalent code.
  • Has bench::press(), which allows you to easily perform and combine benchmarks across a large grid of values.
  • Uses adaptive stopping by default, running each expression for a set amount of time rather than for a specific number of iterations.
  • Expressions are run in batches and summary statistics are calculated after filtering out iterations with garbage collections. This allows you to isolate the performance and effects of garbage collection on running time (for more details see Neal 2014).

The times and memory usage are returned as custom objects which have human readable formatting for display (e.g. 104ns) and comparisons (e.g. x$mem_alloc > "10MB").

Maybe it won't work for me because it says:

bench::mark() will throw an error if the results are not equivalent, so you don’t accidentally benchmark inequivalent code.
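That check can be switched off: `bench::mark()` has a `check` argument, and with `check = FALSE` it skips the equality comparison, which is what you want when the expressions deliberately return different model objects. A minimal sketch (the simulated data frame and formulas here are made up for illustration; the `mem_alloc` column holds the memory allocated by each expression):

```r
library(bench)

# Toy data standing in for myDF
set.seed(1)
df <- data.frame(y  = rbinom(500, 1, 0.5),
                 x1 = rnorm(500),
                 x2 = rnorm(500))

res <- bench::mark(
  glm_fit = glm(y ~ x1 + x2, data = df, family = "binomial"),
  lm_fit  = lm(y ~ x1 + x2, data = df),
  check = FALSE,     # the model objects differ by design, so skip the equality check
  iterations = 1     # one run per expression, like replications = 1
)
res[, c("expression", "median", "mem_alloc")]
```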

I've tried microbenchmark before, but because of the way it works internally it produced out-of-memory errors when benchmarking functions that need a lot of memory.

Maybe I could also use the profmem package, though I would prefer something integrated and simple.
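For reference, profmem usage is fairly compact: `profmem()` records all R allocations made while an expression runs, and `total()` sums them. A small sketch (again with a built-in dataset as a stand-in; note that profmem requires R to have been built with memory profiling enabled, which is the case for the standard CRAN binaries):

```r
library(profmem)

# Record every allocation made while fitting the model
p <- profmem({
  fit <- lm(mpg ~ wt + hp, data = mtcars)
})

total(p)   # total bytes allocated while the expression ran
```

This reports total allocation rather than peak resident memory, so it answers a slightly different question than "maximum memory used", but it is often a good proxy when comparing model fits.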

This topic was automatically closed 21 days after the last reply. New replies are no longer allowed.

If you have a query related to it or one of the replies, start a new topic and refer back with a link.