Hi! This question has been asked (and more or less ignored) before, but I haven't found a good discussion of the motivation behind testthat's design.
TL;DR version: Why are tests written separately from the code they test?
My instinct is that the best way to maintain tests is to keep them within their relevant context, i.e., while I'm building feature x, right then and there in the script I should write a test to make sure that feature x was built to spec.
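To make the instinct concrete, here's a minimal sketch of what I mean by an inline check (`parse_pairs` is a made-up example function, not anything from testthat itself) — the expectation sits right next to the code it verifies:

```r
library(testthat)

# Hypothetical feature: parse "key=value" strings into a named list
parse_pairs <- function(x) {
  parts <- strsplit(x, "=", fixed = TRUE)
  setNames(
    lapply(parts, `[[`, 2),
    vapply(parts, `[[`, 1, FUN.VALUE = character(1))
  )
}

# Inline check, right where the feature is built:
# testthat expectations also work outside test_that() blocks,
# signalling an error on failure.
expect_equal(parse_pairs(c("a=1", "b=2")),
             list(a = "1", b = "2"))
```

If the expectation fails, the script stops at exactly that line, so the point of failure is unambiguous.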
Sure, I could finish the entire function and then write a test that checks it, but I still lose granularity in pinpointing the exact point of failure.
I also understand that testthat is designed around testing functions. In my main production environment most of the code is embedded in functions, but there is some output parsing and there are conditional actions that I feel are inappropriate to wrap in a function... which means there are bits and bobs outside of functions.
SO:
- I could copy-paste the same code from the function bits into the test files, and rinse and repeat every time I change the function code, or
- I could save those bits of code in separate files and source them into both places. This does create a bit of environment hell, because R CMD check doesn't start the tests from the expected folder... but it's nothing a little here::here magic can't solve.
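For reference, the second option might look something like this (the file name `R/parsing-helpers.R` and its contents are placeholders for whatever shared snippet gets sourced in both places):

```r
# tests/testthat/test-parsing.R
library(testthat)
library(here)

# here::here() resolves paths from the project root, regardless of the
# working directory R CMD check or devtools::test() happens to use.
source(here("R", "parsing-helpers.R"))  # hypothetical shared snippet

test_that("shared helpers loaded and behave as expected", {
  expect_true(exists("parse_pairs"))
})
```

The same `source(here(...))` call goes in the production script, so both sides run the identical code.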
The question remains: why can't we specify a mode where the unit tests run during R CMD check are the assertions embedded in the source files themselves? What I would like is for the testthat file to contain only inputs, which I feed into functions that have testthat assertions inside them.
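A sketch of that inverted pattern, assuming nothing beyond testthat itself (`checked_mean` is a hypothetical function carrying its own expectations):

```r
library(testthat)

# The assertions live inside the function, next to the logic they guard.
checked_mean <- function(x) {
  expect_true(is.numeric(x))   # runs wherever the function runs
  expect_gt(length(x), 0)
  mean(x)
}

# The test file then only supplies inputs:
test_that("checked_mean handles a plain numeric vector", {
  expect_equal(checked_mean(c(1, 2, 3)), 2)
})
```

The embedded expectations fire on every call, in tests and in production alike, while the test file shrinks to a list of inputs.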
Am I way off?