Saving test info from testthat

Hi there,

I'd like to keep a log of testthat results somewhere so that I can look back over date/timestamped test runs.

As I understand it, I should be able to use a "Reporter" to do this, but I can't seem to find examples in the testthat documentation. The closest I can find is the Reporter help file ("Manage test reporting — Reporter • testthat"), but it doesn't show how to actually use a reporter and unpack its results. I also can't find much about reporters in "Test reporter: minimal. — MinimalReporter • testthat", and I see they are only mentioned in passing in the NEWS ("Changelog • testthat").

Yes, I can check the GitHub Actions test runs, but sometimes those aren't accurate for GH-related reasons.

I should also note that @yoni_cd's https://github.com/yonicd/covr package does nearly everything I want; however, I have run into some minor issues to do with my package that break things.

I got some good advice from Miles McBain and Mark Padgham, which pointed me towards using a reporter such as ListReporter.

However, when I run my tests like so:

out <- testthat::test_local(reporter = testthat::ListReporter)

My tests run, and then I get an error - here's the full output:

> out <- testthat::test_local(reporter = testthat::ListReporter)
✔ Initialising python and checking dependencies ... done!               


running 4 chains simultaneously on up to 8 CPU cores
    warmup ========================================== 50/50 | eta:  0s          
  sampling ========================================== 50/50 | eta:  0s          
running 4 chains simultaneously on up to 8 CPU cores
    warmup ============================================ 5/5 | eta:  0s          
  sampling ============================================ 5/5 | eta:  0s          
NOTE: When using GPU, the random number seed may not always be respected (results may not be fully reproducible).
For more information, see details of the `compute_options` argument in `?calculate`.
You can turn off this message with:
`options(greta_gpu_message = FALSE)`
running 4 chains simultaneously on GPU
    warmup ============================================ 5/5 | eta:  0s          
  sampling ============================================ 5/5 | eta:  0s          
Metal device set to: Apple M1

systemMemory: 16.00 GB
maxCacheSize: 5.33 GB

Metal device set to: Apple M1
Metal device set to: Apple M1


systemMemory: 16.00 GB
maxCacheSize: 5.33 GB


systemMemory: 16.00 GB
maxCacheSize: 5.33 GB

i Installing python modules into 'greta-env-tf2' conda environment, this may ...
x Installing python modules into 'greta-env-tf2' conda environment, this may ...

Error: Test failures
In addition: Warning messages:
1: In for (name in names) { :
  closing unused connection 6 (<-localhost:11548)
2: In for (name in names) { :
  closing unused connection 5 (<-localhost:11548)
3: In for (name in names) { :
  closing unused connection 6 (<-localhost:11548)
4: In for (name in names) { :
  closing unused connection 5 (<-localhost:11548)
> out
NULL

So I guess my question goes out to folks with unit testing experience:

How can I keep a log of testthat results somewhere so that I can look back at date/timestamped test runs?
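For example, I was imagining I'd end up doing something roughly like this (just a sketch; I'm not certain that `get_results()` or the `as.data.frame()` coercion behave exactly this way in every testthat version):

library(testthat)

# Sketch: capture results with a ListReporter *instance* (rather than the
# class itself), then save them with a timestamp.
lr <- ListReporter$new()
test_local(reporter = lr, stop_on_failure = FALSE)

results <- lr$get_results()
results_df <- as.data.frame(results)

saveRDS(
  results_df,
  file = sprintf("testthat-log-%s.rds", format(Sys.time(), "%Y-%m-%d-%H%M%S"))
)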

Any help would be much appreciated!

Cheers,

Nick


The check reporter saves info about failed tests into an RDS file: testthat/reporter-check.R at 9cd6e01be008376b1f7f2d8d528d725b87c0d01e · r-lib/testthat · GitHub

So if you use testthat::test_local(reporter = "check"), or just run R CMD check with the default testthat setup, you'll end up with an RDS file containing the problems.
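For example, something along these lines should leave a date-stamped copy behind after each run (a sketch; I'm assuming from the linked source that the file is called testthat-problems.rds and that it is written next to the tests, so double-check the name and location for your testthat version):

library(testthat)

# Run the tests with the check reporter; don't stop on failure, so the
# run completes and the results file can still be written.
test_local(reporter = "check", stop_on_failure = FALSE)

# Assumed filename/location, taken from the linked reporter-check.R source;
# it may only be written when there are failures.
problems_path <- file.path("tests", "testthat", "testthat-problems.rds")

if (file.exists(problems_path)) {
  # Keep a date/timestamped copy so previous runs can be compared later.
  file.copy(
    problems_path,
    sprintf("testthat-problems-%s.rds", format(Sys.time(), "%Y-%m-%d-%H%M%S"))
  )
}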

Is this what you want?

