With a few colleagues, we are developing a series of learnr tutorials for students in my class. The tutorials are included in a package, together with some datasets and useful functions for the students to use.
Since the packages used in the tutorials (including learnr itself) are constantly updated on CRAN and GitHub, and we all frequently have to resolve GitHub merge conflicts, I am a little concerned that some parts of the tutorials may stop working correctly.
For "normal" functions, I always use testthat to ensure they keep working properly. Is there a way to use testthat (or something like it) for learnr tutorials? In particular, I would like to verify that the code in a given exercise still produces the correct result, that plots are still correct, that data still load properly, et cetera.
In an ideal world, I would like testthat to check that exercise 3 in tutorial 4 gives the correct output, that the code in exercise 4 of tutorial 5 runs without error, that the grading of exercise 2 in tutorial 8 still works correctly, et cetera.
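To make the idea concrete, here is roughly what I have in mind. This is only a sketch, not an existing learnr feature: `extract_chunk` and `run_chunk` are hypothetical helpers I would have to write myself, and the chunk label and expected value are made up for illustration.

```r
library(testthat)

# Hypothetical helper: pull the body of a labelled code chunk out of a
# tutorial .Rmd so that testthat can run it outside the tutorial.
extract_chunk <- function(rmd_path, label) {
  lines <- readLines(rmd_path)
  open  <- grep(sprintf("^```\\{r %s[,}]", label), lines)
  if (length(open) != 1) stop("chunk '", label, "' not found")
  close <- grep("^```\\s*$", lines)
  close <- close[close > open][1]
  lines[(open + 1):(close - 1)]
}

# Evaluate the extracted chunk in a fresh environment and return its result.
run_chunk <- function(rmd_path, label) {
  code <- extract_chunk(rmd_path, label)
  eval(parse(text = code), envir = new.env(parent = globalenv()))
}

# Demo with a throwaway tutorial file; the label "exercise-3" is invented.
rmd <- tempfile(fileext = ".Rmd")
writeLines(c(
  "---", "title: demo", "---", "",
  "```{r exercise-3, exercise=TRUE}",
  "mean(1:10)",
  "```"
), rmd)

test_that("exercise-3 still gives the correct result", {
  expect_equal(run_chunk(rmd, "exercise-3"), 5.5)
})
```

That would let each exercise be pinned down by a normal testthat expectation, but I suspect there are edge cases (setup chunks, `exercise.setup` options, grading code) that a naive regex-based extractor like this would miss, which is why I am asking whether proper tooling for this exists.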
Right now, all I can do is go through the long tutorials and test each exercise manually, which is time-consuming and highly inefficient.
Is there any way to do this with testthat or something similar?