Thanks for the suggestions! futures, doParallel, batchtools, etc. could definitely help make long-running jobs finish faster, but I'm worried about dealing with external bioinformatics software that can take a while to run. With Jupyter Notebooks, I can document all of the bash calls to software for genome assembly, BLAST, metagenome analyses, etc., and these bash jobs can take a long time to run (e.g., a long BLAST job). If I try to do this with Rmarkdown + knitr, can I call a bash job in a bash code chunk and then switch to a different project while that job takes hours or days to run? Or am I stuck either waiting for the bash job to complete, or keeping the job external to Rmarkdown + knitr because it takes too long?
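For concreteness, here's the kind of thing I'd want to write (a sketch: `cache=TRUE` is a real knitr chunk option, though I'm not sure how well caching behaves with the bash engine, and the `blastn` call is just a stand-in for any long-running external tool):

````markdown
```{bash long-blast, cache=TRUE}
# Hours- or days-long external job; ideally knitr would skip this
# chunk on re-render once it has completed successfully
blastn -query contigs.fa -db nt -outfmt 6 -out hits.tsv
```
````

The question is whether I can knit the rest of the document while a chunk like this is still running, or at least resume later without re-running it.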
In other words, does Rmarkdown + knitr really work for documenting a bioinformatics pipeline? I know I could use snakemake or other pipelining software for such cases (and I sometimes do), but that can require a lot of setup, which isn't needed for relatively simple pipelines.