Which of these shiny::testServer usages are most scalable?

Hello!

I'm really enjoying the new testServer() function from shiny (if anyone reading this contributed to that: thank you!!)

When things in testServer fail, I would like them to give me an informative error message. I have a few different ways I could do that. Do you have any wisdom to offer about which approach is best in the long run?

I'll use the following test module to demonstrate the different approaches:

# shiny 1.4.0.9003
library(shiny)

example_module <- function(id) {
  moduleServer(
    id,
    function(input, output, session) {
      # A trivial reactive to test against: it always returns 1
      get_answer <- reactive({
        1
      })
    }
  )
}

Approach 1: stopifnot

testServer(example_module, {
  stopifnot(get_answer() == 5)
})

Message:

Error in stopifnot(get_answer() == 5) : get_answer() == 5 is not TRUE

This is the main approach recommended in the documentation. I think that error message is okay, but I feel like it would be pretty hard to read out of context.
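
One tweak worth noting (not shiny-specific, and I haven't leaned on it much yet): recent versions of R (4.0.0, if I remember right) let you name the conditions passed to stopifnot(), and the name is used as the error message. That would keep Approach 1 but make the failure read better out of context:

testServer(example_module, {
  # The name of the condition becomes the error message on failure
  stopifnot("get_answer() should return 5 at the start of the app" = get_answer() == 5)
})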

Approach 2: expect_equal

library(testthat)

testServer(example_module, {
  expect_equal(get_answer(), 5)
})

Message:

Error: get_answer() not equal to 5.
1/1 mismatches
[1] 1 - 5 == -4 

This message is better, I think. But it would be great if I could also write down a description, so my future self knows what I was trying to test here. For example: what if I'm testing a value that changes after an input is set, so I check it both before and after? Which of those two checks failed? (I sketch that scenario under Approach 3 below.) That's exactly what test_that does:

Approach 3: Separate test_that calls

test_that("The get_answer() function returns 5 at the start of the app", {
  testServer(example_module, {
    expect_equal(get_answer(), 5)
  })
})

Message:

Error: Test failed: 'The get_answer() function returns 5 at the start of the app'
* get_answer() not equal to 5.
1/1 mismatches
[1] 1 - 5 == -4
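
And for the before/after scenario I mentioned under Approach 2, separate descriptions make it obvious which check failed. A rough sketch (assuming a hypothetical variant of the module where get_answer() reacts to an input$n, which my example module doesn't actually have):

test_that("get_answer() returns 5 before any input is set", {
  testServer(example_module, {
    expect_equal(get_answer(), 5)
  })
})

test_that("get_answer() returns the value of input$n once it is set", {
  testServer(example_module, {
    session$setInputs(n = 7)
    expect_equal(get_answer(), 7)
  })
})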

This is my favorite so far, but I'm worried about how well it will scale. It means a separate testServer() call for each test (or small group of tests). Will that be really slow with lots of tests? A quick timing comparison validates that hunch:

ptm <- proc.time()
test_that("The get_answer() function returns 5 at the start of the app", {
  for(i in 1:100) {
    testServer(example_module, {
      expect_equal(get_answer(), 5)
    })
  }
})
print(proc.time() - ptm)

Returns:
user system elapsed
1.359 0.033 1.542

And

ptm <- proc.time()
test_that("The get_answer() function returns 5 at the start of the app", {
    testServer(example_module, {
      for(i in 1:100) {
        expect_equal(get_answer(), 5)
      }
    })
})
print(proc.time() - ptm)

Returns:
user system elapsed
0.207 0.019 0.493

So yeah: roughly a 3x difference in elapsed time here, which could add up to a pretty big cost when testing a whole package.
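
One possible middle ground I haven't fully explored: testthat expectations accept an info argument (and a label argument) that gets appended to the failure message, so I could keep many checks in one testServer() call and still describe each of them. A sketch:

testServer(example_module, {
  # info = is extra context that testthat prints when the expectation fails
  expect_equal(get_answer(), 5,
               info = "get_answer() should return 5 at the start of the app")
})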

Approach 4: custom assert

I thought that maybe a custom assert_that might work here: it would let me write a separate description for each check, while keeping most (or all) of the checks in one big testServer() call. Like this:

assert_that <- function(error, expr, msg = "") {
  # error: a description of the expectation, shown when it fails
  # expr:  the condition being checked
  # msg:   optional extra detail (e.g. expected vs. actual values)
  if (!expr) {
    if (nzchar(msg)) print(msg)
    stop(paste0("Failed: ", error), call. = FALSE)
  }
}

testServer(example_module, {
  assert_that("The get_answer() function returns 5 at the start of the app",
              get_answer() == 5,
              paste0("Expected: 5 | Actual: ", get_answer()))
})

Output:

[1] "Expected: 5 | Actual: 1"
 Error: Failed: The get_answer() function returns 5 at the start of the app 

This approach seems to run as quickly as Approach 2, with a message about as informative as Approach 3's, but it does so at the expense of code brevity: it's damn verbose.
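
If this turned out to be the right direction, a thinner helper could lean on testthat for the comparison and just attach the description (check_equal is a name I'm making up here):

check_equal <- function(description, actual, expected) {
  # Delegate the comparison (and the mismatch report) to testthat,
  # attaching a human-readable description to the failure message
  testthat::expect_equal(actual, expected, info = description)
}

testServer(example_module, {
  check_equal("The get_answer() function returns 5 at the start of the app",
              get_answer(), 5)
})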


Is this something I should post in the shiny repo's issues tab? Is there a feature like this that already exists, or is in the works, that I'm missing? Or is one of my four approaches clearly better than the others?

Thanks! 🙂
