Testthat motivation

Hi, this question has been more or less asked (and ignored) before, but I haven't found a good discussion of the motivation behind testthat.

TL;DR version: Why are tests written separately from the code they test?

My instinct is that the best way to maintain tests is to keep them within their relevant context. I.e., if I'm building feature x, right then and there in the script I should write a test to make sure that feature x was built to spec.

Sure, I could finish the entire function and then write a test that checks it, but I still lose granularity in terms of understanding the EXACT point of failure.

I also understand that testthat is written for function testing. While most of the code in my main production environment is embedded in functions, there is some output parsing and some conditional actions that I feel are inappropriate to embed in a function... which means there are bits and bobs living outside of functions.

SO:

  1. I could copy-paste the same code from the function bits into the test files, and rinse and repeat every time I change the function code, or
  2. I could save those bits of code in separate files and then source them into both places (see the sketch below). This does create a little bit of environment hell, because R CMD check doesn't use the expected start folder for tests... but it's nothing a little here::here magic can't solve.
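To make option 2 concrete, here is a minimal sketch of a test file that sources a shared snippet. The file inst/scripts/parse_output.R, the parse_output() function, and the fixture sample_output.rds are all hypothetical names used only for illustration.

```r
# Minimal sketch of option 2: the shared snippet is sourced into both the
# production script and the tests. All paths and names are hypothetical.
library(testthat)
library(here)

source(here::here("inst", "scripts", "parse_output.R"))  # defines parse_output()

test_that("parse_output() returns the columns downstream code relies on", {
  raw <- readRDS(here::here("tests", "testthat", "fixtures", "sample_output.rds"))
  parsed <- parse_output(raw)
  expect_true(all(c("project", "status") %in% names(parsed)))
})
```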

The question remains: why can't we specify a method where the unit tests run when we check the package are the assertions embedded in the files themselves? What I would like is for the testthat file to contain only inputs, which I feed into functions that have testthat assertions inside them.

Am I way off?


One motivation is the rule of thumb that your test code should be approximately 10 times the size of your actual code. Keeping it all in the same place creates a gigantic mess, so it makes sense to separate them.

In the same vein, your tests should not test an implementation; they should test the logic, so that when you change the actual implementation your tests still pass. That means you should (in theory, at least) not know how the function is implemented when you are writing the tests.

As for things that are not embedded in functions - I'm not sure what you mean. Can you give an example where something like this makes sense? I mean, it's code all the way down, so why not put it into a function and test it?


Let's say we have a function that is wrapped in safely(). After we receive its output, we check it, and if there's an error we fire off an email to the person in charge of that section. So it's literally just an if (!is.null(thingie$error)) emailsender(blabla). Say I wanted to check whether that email fired... that's where I would put the expectation, right underneath it.
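Roughly, a minimal sketch of how that check could be pulled into a small, testable function. send_alert_email() is a hypothetical stand-in for the real mailer (e.g. blastula or mailR), and the failing call is just a placeholder:

```r
# Minimal sketch: wrap the "check error, fire email" step in its own function.
library(purrr)
library(testthat)

# Hypothetical stand-in for the real mailer.
send_alert_email <- function(to, body) message("Would email ", to, ": ", body)

notify_on_error <- function(safe_result, recipient) {
  # safely()-wrapped calls return list(result = ..., error = NULL) on success,
  # and a condition object in $error on failure.
  if (!is.null(safe_result$error)) {
    send_alert_email(to = recipient, body = conditionMessage(safe_result$error))
    return(TRUE)   # an email was fired
  }
  FALSE
}

test_that("an email fires when the wrapped call errors", {
  thingie <- purrr::safely(function() stop("boom"))()
  expect_true(notify_on_error(thingie, "me@example.com"))
})
```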

It is very interesting to hear this point about tests not needing to check the implementation, but rather to test the logic... that's really interesting... I'll have to think about that.

The approach of testing tasks instead of details can help improve the final code. It lifts you from "How can this be done?" to "What's the most useful result?" You can even write the tests before the functions; it's like keeping your eyes on the finish line in a race.
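As a minimal sketch, a test can pin down the task before the function exists at all. Here summarise_project() is hypothetical and not yet written; the test only describes the result we want:

```r
# Minimal sketch of writing the test before the function: summarise_project()
# does not exist yet, so this test only pins down the task, not the details.
library(testthat)

test_that("summarise_project() gives one row per project with a status", {
  input <- data.frame(project = c("A", "A", "B"),
                      ok      = c(TRUE, FALSE, TRUE))
  out <- summarise_project(input)

  expect_equal(nrow(out), 2)                                # one row per project
  expect_true(all(c("project", "status") %in% names(out)))  # the columns the task needs
})
```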

One possible workflow is to first write the ideal code, making up variable and function names as needed, and then create the variables and functions you made up. A package vignette is a great way to do this. When you're writing a vignette to show users how easy and helpful your package is, the parts that are difficult or useless become obvious. Noticing this before you write the implementation can save you hours or days of work.


I really like this! Of course! How simple! I'm writing code IN ORDER TO x. As soon as I write that as the function output, the code becomes much simpler!

On the vignette issue, I do like vignettes and use them for my open-source work... but for internal packages intended to be crontabbed I think it's a bit different.

My use case is a bit specific. I'm doing ETL on about 10-20 projects owned by others (with high variability in the DBA skills on those projects). I generate some viz, push to a data warehouse, and trigger whatever alerts need to be triggered. This runs every 15 minutes. The unit tests that cover the main functions (about 6k lines of dense code) are there to protect me and my team from ourselves, but also (mainly) to help us fix something that broke for project A without breaking the function for the rest of the projects. This is the tricky part... I can never assume that I know what I'm getting, because these are living systems that change quite unpredictably. So my approach is to be more granular with the unit tests and test individual assumptions, like "Does this table actually exist?" "Has this table been updated today?" that kinda stuff...
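As a rough sketch, those assumption checks might look something like this in testthat terms. connect_to_warehouse(), the project_a_events table, and the loaded_at column are hypothetical placeholders:

```r
# Minimal sketch of granular "assumption" tests against a live source.
library(testthat)
library(DBI)

con <- connect_to_warehouse()  # hypothetical helper returning a DBI connection

test_that("project A's events table exists", {
  expect_true(DBI::dbExistsTable(con, "project_a_events"))
})

test_that("project A's events table was updated today", {
  latest <- DBI::dbGetQuery(
    con, "SELECT MAX(loaded_at) AS latest FROM project_a_events"
  )$latest
  expect_gte(as.Date(latest), Sys.Date())
})

DBI::dbDisconnect(con)
```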

The other thing I do: whenever a function can fail, I wrap it in safely() and then add an error saver that says if !is.null(blah$error) then save the error as a CSV. I then compile all the error files, include them in the error tab of that project's flexdashboard, and feed them to a rollup command center. I will also email myself and the managers if certain inflexible assumptions are broken. But now we have over 10 error file locations... I'm not sure this is the ideal approach either. Willing to listen!
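As a rough sketch of that error-saver pattern, where run_step(), the project name, and the errors/ folder are hypothetical placeholders:

```r
# Minimal sketch of the error-saver pattern: run a safely()-wrapped step,
# write any error to a per-project CSV, then roll the files up later.
library(purrr)

run_step  <- function() stop("source table is missing")   # stand-in for a real ETL step
safe_step <- purrr::safely(run_step)

res <- safe_step()
if (!is.null(res$error)) {
  err <- data.frame(
    project   = "project_a",
    timestamp = format(Sys.time()),
    message   = conditionMessage(res$error)
  )
  dir.create("errors", showWarnings = FALSE)
  write.csv(err,
            file.path("errors", paste0("project_a_", format(Sys.Date()), ".csv")),
            row.names = FALSE)
}

# Roll the per-project files up for the flexdashboard / command center:
all_errors <- do.call(rbind, lapply(list.files("errors", full.names = TRUE), read.csv))
```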