Blank lines inserted into testthat reports.

I have a test suite using the testthat framework. I've added some annotations (using cat()) to individual tests that run in a loop. Occasionally a line appears in the report that contains only the annotation, without the counter and context string.

It's not easy to produce a reprex for this, but I was wondering if anyone has an idea why it happens?

==> devtools::test()

Loading ds.anomlr
Testing ds.anomlr
v |  OK F W S | Context
/ |   0       | Large data (v1.0.4) unit tests @452 - error
- |   1       | Large data (v1.0.4) unit tests @048 - pass
\ |   2       | Large data (v1.0.4) unit tests @140 - pass
| |   3       | Large data (v1.0.4) unit tests @260 - error
/ |   4       | Large data (v1.0.4) unit tests @016 - pass
- |   5       | Large data (v1.0.4) unit tests @346 - error
 @243 - pass
| |   7       | Large data (v1.0.4) unit tests @374 - error
 @024 - pass
- |   9       | Large data (v1.0.4) unit tests @108 - pass
\ |  10       | Large data (v1.0.4) unit tests @096 - pass
| |  11       | Large data (v1.0.4) unit tests @196 - pass
/ |  12       | Large data (v1.0.4) unit tests @467 - pass
- |  13       | Large data (v1.0.4) unit tests @324 - error
\ |  14       | Large data (v1.0.4) unit tests @420 - error
 @168 - error
/ |  16       | Large data (v1.0.4) unit tests @050 - pass
- |  17       | Large data (v1.0.4) unit tests @476 - error
\ |  18       | Large data (v1.0.4) unit tests @028 - pass
| |  19       | Large data (v1.0.4) unit tests @191 - pass
 @433 - pass
- |  21       | Large data (v1.0.4) unit tests @532 - pass
\ |  22       | Large data (v1.0.4) unit tests @251 - error
| |  23       | Large data (v1.0.4) unit tests @142 - pass
/ |  24       | Large data (v1.0.4) unit tests @019 - pass
- |  25       | Large data (v1.0.4) unit tests @299 - error
\ |  26       | Large data (v1.0.4) unit tests @031 - pass
| |  27       | Large data (v1.0.4) unit tests @586 - error
/ |  28       | Large data (v1.0.4) unit tests @473 - error
- |  29       | Large data (v1.0.4) unit tests @101 - pass
\ |  30       | Large data (v1.0.4) unit tests @141 - pass
| |  31       | Large data (v1.0.4) unit tests @432 - pass
/ |  32       | Large data (v1.0.4) unit tests @069 - error
v |  33       | Large data (v1.0.4) unit tests [7.4 s]
v |   6       | Pipelines (v1.0.4) unit tests
v |  16       | prep (v1.0.4) unit tests [0.2 s]
v |   4       | sim (v1.0.4) unit tests
v |  32       | svd (v1.0.4) unit tests [0.2 s]
v |   9       | T1 (v1.0.4) unit tests [0.2 s]

== Results ============================================================================================================================================================
Duration: 8.1 s

OK:       100
Failed:   0
Warnings: 0
Skipped:  0

Nice code.

Just to add, the body of the loop looks like this. I've simplified it with ...s because those parts add no context; it's the cat() statement that interrupts and forces the display, and sometimes it gets it right, sometimes not. It always happens on the same loop elements, but inspecting the output I can't see any reason for it, and the tests are all successful.

    if (...) {
        expect_error({...})
        cat(paste(sprintf(" @%03d", i), "- error\n"))
    } else if (...) {
        expect_warning({...})
        cat(paste(sprintf(" @%03d", i), "- warning\n"))
    } else {
        ... %>% expect_equal(...)
        cat(paste(sprintf(" @%03d", i), "- pass\n"))
    }
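
For what it's worth, one way to sidestep the interleaving entirely is to fold the annotation into a test_that() description instead of writing it with cat(): testthat then labels any failure with the @NNN index itself, and nothing competes with the progress reporter for the console. A minimal sketch, assuming the loop runs at the top level of the test file (not nested inside another test_that()), and using hypothetical helpers is_error_case(), is_warning_case(), run_case(), and expected_for() plus a loop bound n to stand in for the elided bits:

    library(testthat)

    for (i in seq_len(n)) {
        # The index becomes part of the test name, so any failure is
        # reported as "... @NNN" without extra printing during the run.
        test_that(sprintf("large data case @%03d", i), {
            if (is_error_case(i)) {
                expect_error(run_case(i))
            } else if (is_warning_case(i)) {
                expect_warning(run_case(i))
            } else {
                expect_equal(run_case(i), expected_for(i))
            }
        })
    }

The trade-off is that passing iterations are no longer echoed individually; if you need the live per-iteration trace, this sketch won't replace it.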
