Regarding optimization like this, and specifically with rmarkdown, it is important to know what takes time:
- Is it the computation of the results and graphics?
- Is it the conversion of the R Markdown file to the output format?
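A quick way to answer this is to time the rendering as a whole, then time the heavy computations on their own. A minimal sketch (assuming a file named `report.Rmd` and a hypothetical `compute_results()` function standing in for your expensive code):

```r
# Time a full render: computation + pandoc conversion together
system.time(
  rmarkdown::render("report.Rmd", quiet = TRUE)
)

# Time only the expensive computation, outside of rendering.
# compute_results() is a placeholder for your own heavy code.
system.time(
  results <- compute_results()
)
```

If the second timing dominates the first, caching or precomputing results is where the gains are; if not, the conversion step itself is the bottleneck.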
With R Markdown, the most common workflow is to compute everything within the chunks. This works fine for one report, or for multiple reports if all the data used are precomputed or unique to the parameterized value.
Another workflow would be a two step process:
- An R script prepares all the data. This would output the results to CSV files or another optimized format. This step could be optimized using parallelization and other techniques to make things quicker. Tables and plots could also be precomputed in this step if that helps.
- Then, using parameterized reports, rendering the Rmd file essentially becomes querying the previous results and inserting the plots.
This kind of workflow caches results in the first step, so that rendering the publication does not recompute everything when it is not needed. In your case, what does each rendering of the Rmd really need to recompute, and what can be shared across all documents? The latter should be computed only once for all reports. This will save time.
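The two-step workflow above can be sketched as follows. This is a hedged example: `prepare_data.R`, `report.Rmd`, the `country` parameter, and the file names are all assumptions to illustrate the shape of the loop, not your actual setup:

```r
# Step 1 (run once): prepare and save the shared data
# -- in prepare_data.R --
shared_data <- data.frame(country = c("FR", "DE"), value = c(1.2, 3.4))
write.csv(shared_data, "shared_data.csv", row.names = FALSE)

# Step 2: render one report per parameter value.
# Each render only reads the precomputed CSV instead of recomputing.
for (cty in c("FR", "DE")) {
  rmarkdown::render(
    "report.Rmd",
    params      = list(country = cty),
    output_file = paste0("report-", cty, ".html")
  )
}
```

Inside `report.Rmd`, a chunk would then do no more than `read.csv("shared_data.csv")` and filter on `params$country`.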
There is tooling to help with such a workflow:
- The first, simple option is the caching mechanism supported by rmarkdown and provided by knitr.
- The second is a workflow tool like the targets R package.
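For the knitr option, caching is enabled per chunk with the `cache = TRUE` chunk option: the chunk's results are stored on disk and reused on the next render as long as the chunk code has not changed. A minimal Rmd fragment (the chunk name and its contents are illustrative):

```r
# In report.Rmd, a cached chunk looks like:
#
# ```{r expensive-computation, cache=TRUE}
# results <- some_long_computation()   # placeholder for your heavy code
# ```
#
# On re-render, knitr reloads `results` from the cache directory
# instead of re-running the chunk, unless the chunk code changed.
```

Note that knitr's cache is invalidated by any change to the chunk's code, and dependencies between cached chunks must be declared (e.g. with the `dependson` chunk option), so it works best for self-contained expensive chunks.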
This tool helps you set up a whole pipeline from data entry to report rendering. It lets you define steps with inputs and outputs, which in turn supports parallelization and caching: it is clever enough to know what can be parallelized and what has changed or not, and therefore when to reuse cached values.
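As a hedged sketch of what a targets pipeline could look like for this use case (file names and the data-loading step are assumptions; `tarchetypes::tar_render()` is the helper that makes an R Markdown render a pipeline target):

```r
# -- _targets.R --
library(targets)
library(tarchetypes)

tar_option_set(packages = c("rmarkdown"))

list(
  # Step 1: data preparation, cached and re-run only when inputs change
  tar_target(raw_file, "data/raw.csv", format = "file"),
  tar_target(shared_data, read.csv(raw_file)),

  # Step 2: the report depends on shared_data; it is re-rendered
  # only when the data or the Rmd file itself changes
  tar_render(report, "report.Rmd")
)
```

Running `targets::tar_make()` then builds only the outdated targets, and independent targets can be built in parallel with `tar_make()`'s distributed-computing backends.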
In my experience, making such a use case quicker when rendering several R Markdown documents is a matter of workflow. Obviously, a prerequisite is to know where the bottleneck lies and which parts are the best candidates for optimization (a small improvement that saves a lot of time).
Hope this helps.