`devtools::test()` passes but `devtools::check()` does not

Hi folks,

I've got an issue where devtools::test() passes, but devtools::check() does not. On top of that, my GitHub Actions all pass; the failures only show up when I run devtools::check() locally.

I've read through some of the past topics that describe this issue:

As well as the GitHub issue thread on this topic: Usual Suspects for specific failure modes · Issue #483 · hadley/r-pkgs · GitHub

Some of the things I've tried include:

  • Running R CMD build and then R CMD check --as-cran: this showed no errors, although we do skip a bunch of tests on CRAN since we cannot get tensorflow installed there (see the sketch after this list)
  • Removing all source() calls inside tests (this resulted in other tests failing, but not the same test errors I am getting)
  • Removing unnecessary library() calls from inside tests (commit) and from the testthat.R file (commit)
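For context, the CRAN skips follow the usual testthat pattern, roughly like this (the test name and body here are placeholders, not greta's actual tests):

library(testthat)

test_that("tensorflow-dependent behaviour", {
  skip_on_cran()     # tensorflow cannot be installed on CRAN machines
  expect_true(TRUE)  # placeholder for the real tensorflow expectations
})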

Some things that I suspect could be the issue:

  • There are still some places that access unexported functions from the greta package, e.g., greta:::check_op
  • There is a helpers.R file inside the testthat folder that calls library(fields) and library(tensorflow), which might upset things?
  • Inside every test_that() call, greta will check for a Python installation, and sometimes it does not find it. I'm wondering if perhaps something is happening there (see the sketch after this list).
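On the last point, the kind of guard I mean looks roughly like this (skip_if_no_python() is a hypothetical helper name, and I'm assuming reticulate is what locates Python):

library(testthat)

# hypothetical helper; greta's real internals differ
skip_if_no_python <- function() {
  skip_if_not(
    reticulate::py_available(initialize = TRUE),
    "Python not available"
  )
}

test_that("an op builds", {
  skip_if_no_python()
  expect_true(TRUE)  # placeholder for the Python-dependent expectations
})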

Overall, the issue is that about 248 tests fail because a Python environment is not found, but only under devtools::check(); under devtools::test() they all pass fine.

This doesn't happen on GH actions, only locally.

Any suggestions are very welcome!

Current process is here:

:wave: @njtierney!

Not a solution, but I see in greta/test_extract_replace_combine.R at e8a52282442cdba16a4fa0d8126152fb65abf37e · greta-dev/greta · GitHub a source() call that you do not need, as helper files are sourced automatically. See the table in Helper code and files for your testthat tests - R-hub blog
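As a quick sanity check: testthat sources any file in tests/testthat/ whose name starts with "helper" before the tests run, so the explicit source() is redundant. You can see which files qualify with (output here assumes greta's current layout):

list.files("tests/testthat", pattern = "^helper")
#> [1] "helpers.R"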

Ah I see removing the source() had given you errors. What errors?

Some of the errors are to do with resetting the random number generator! Which is good, because it means we were previously missing errors that only arose because of the RNG seed set in helpers.R.

However, a set of errors now appears in test_truncated.R that didn't appear previously... I think I need to debug these functions, although they run fine interactively...

── Error (test_truncated.R:144:3): truncated inverse gamma has correct densities ──
Error in `get(paste("p", spec, sep = ""), mode = "function")`: object 'pinvgamma' of mode 'function' was not found
Backtrace:
 1. compare_truncated_distribution(...) test_truncated.R:144:2
 4. truncdist::qtrunc(u, spec, a = a, b = b, ...)
 5. base::get(paste("p", spec, sep = ""), mode = "function")

── Error (test_truncated.R:273:3): truncated pareto has correct densities ──────
Error in `get(paste("p", spec, sep = ""), mode = "function")`: object 'ppreto' of mode 'function' was not found
Backtrace:
 1. compare_truncated_distribution(...) test_truncated.R:273:2
 4. truncdist::qtrunc(u, spec, a = a, b = b, ...)
 5. base::get(paste("p", spec, sep = ""), mode = "function")

── Error (test_truncated.R:321:3): truncated student has correct densities ─────
Error in `get(paste("p", spec, sep = ""), mode = "function")`: object 'pstudent' of mode 'function' was not found
Backtrace:
 1. compare_truncated_distribution(...) test_truncated.R:321:2
 4. truncdist::qtrunc(u, spec, a = a, b = b, ...)
 5. base::get(paste("p", spec, sep = ""), mode = "function")

── Error (test_truncated.R:374:3): truncated laplace has correct densities ─────
Error in `get(paste("p", spec, sep = ""), mode = "function")`: object 'plaplace' of mode 'function' was not found
Backtrace:
 1. compare_truncated_distribution(...) test_truncated.R:374:2
 4. truncdist::qtrunc(u, spec, a = a, b = b, ...)
 5. base::get(paste("p", spec, sep = ""), mode = "function")
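If I understand the mechanism correctly (this is my reading, not confirmed): truncdist::qtrunc() builds the CDF name from spec and looks it up with get(), and because get() is called from inside truncdist's namespace it can only find functions in attached packages or the global environment, not functions defined locally inside a sourced test file:

spec <- "invgamma"
get(paste("p", spec, sep = ""), mode = "function")
# fails with "object 'pinvgamma' ... not found" unless something providing
# pinvgamma() is attached or sitting in the global environment, which is
# presumably what the library()/source() calls in helpers.R used to do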

However, I think this might be a clue to what is going on.

One of the things I'm finding really hard to understand is how I am getting errors locally, but all tests pass on GH actions:

OK a small update.

GitHub Actions passes fine:

https://github.com/greta-dev/greta/runs/5374251749?check_suite_focus=true

However, locally there is still a difference between test() and check().

It gets weirder. If I run devtools::test_active_file() on test_functions.R after restarting R, I get 13 fails.
However, if I run it again immediately after, I get 2 fails.
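To rule out state carried over within the session, I can also run the file in a completely fresh R process via callr:

callr::r(
  function() testthat::test_file("tests/testthat/test_functions.R"),
  show = TRUE
)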

The R package used to have many source("helpers.R") calls. I have removed all but one of these. It has not made a difference in the output of devtools::check().

To help debug this, I'm trying to save the output of test() compared to check() so that I can compare the results.

I've found that

library(testthat)
options(testthat.output_file = "greta-test-out.xml")
test_dir("tests/")

Produces an XML file, but I've got no nice way to read or parse it, which is annoying, as there are some unicode / CLI escape sequences that clutter and confuse the output, e.g.,

[ [38;5;214mFAIL[39m 6 | WARN 0 | [34mSKIP[39m 10 | PASS 1606 ]

As a result I don't know how many things failed or were skipped. I thiiink it translates to:

FAIL 6 | WARN 0 | SKIP 10 | PASS 1606

But I am not 100% sure.
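One way to check, assuming the cli package is installed: ansi_strip() removes the escape sequences, and on that line it gives exactly those counts:

line <- "[ \033[38;5;214mFAIL\033[39m 6 | WARN 0 | \033[34mSKIP\033[39m 10 | PASS 1606 ]"
cli::ansi_strip(line)
#> [1] "[ FAIL 6 | WARN 0 | SKIP 10 | PASS 1606 ]"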

I tried

options(testthat.output_file = "greta-test-out.txt")
test_dir("tests/", reporter = "minimal")

However, the file greta-test-out.txt contained nothing. So I guess it needs to be XML?
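Another option (a sketch I haven't verified on greta itself): test_dir() returns a results object that can be coerced to a data frame, which is much easier to diff than reporter output:

res <- testthat::test_dir("tests/testthat", reporter = "silent")
df <- as.data.frame(res)
head(df)  # one row per test, with counts of failed/skipped expectations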

Of course, I can run devtools::build() and then R CMD check --as-cran greta_0.4.0.tar.gz.

However, I don't get the same results from R CMD check --as-cran as I do from devtools::check().

Ideally I'd like to save the output of devtools::check(), but I cannot seem to find a way to store or save that output. There is one SO post I found, but it doesn't really help, since it uses R CMD check and not devtools::check().
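One thing that might work (a sketch, not something I've confirmed): devtools::check() wraps rcmdcheck::rcmdcheck(), which returns a structured result that can be saved and inspected later:

res <- rcmdcheck::rcmdcheck(".", args = "--as-cran")
saveRDS(res, "greta-check-out.rds")
res$errors  # the ERROR sections as a character vector
res$stdout  # the full raw check output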

Those are indeed ANSI color codes. You can disable them by setting the envvar "R_CLI_NUM_COLORS" to 1.
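For example:

Sys.setenv(R_CLI_NUM_COLORS = "1")  # plain, colourless output for this session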

devtools::check() and the CI both use rcmdcheck::rcmdcheck(), which in turn runs R CMD check in a new process, so the difference must be in your local setup.
Do you have anything in your .Rprofile that might influence things? (A quick check is sketched below.)
The different output on sequential runs of test_active_file() also hints at something in the environment influencing things... testthat actually runs the tests in the same session, in contrast to rcmdcheck.
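A quick way to see which startup files could be in play (see ?Startup):

file.exists(".Rprofile")                  # project-level profile
file.exists(file.path("~", ".Rprofile"))  # user-level profile
Sys.getenv("R_PROFILE_USER")              # explicit override, if set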

Does this happen in a new session? Is order a factor? Try running check() first followed by test(), and vice versa, each in a fresh session.

What are the errors you are getting?


OK.

So.

Thanks to @maelle for suggesting the current solution: remove all uses of mockery that touch the Python parts.

Specifically, this commit: Address `check()` vs `test()` failure modes by njtierney · Pull Request #501 · greta-dev/greta · GitHub

It turns out that some of these mocked functions don't properly reset or something... which makes greta think it doesn't have Python... which causes 248 tests to fail because Python wasn't found.
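To illustrate what I think is happening (the function names here are entirely hypothetical, not greta's real internals):

library(testthat)
library(mockery)

# stand-in for the greta internals that probe for a Python installation
needs_python <- function() {
  if (!reticulate::py_available()) stop("Python not found")
  "ok"
}

test_that("a friendly error is thrown when Python is missing", {
  stub(needs_python, "reticulate::py_available", FALSE)
  expect_error(needs_python(), "Python not found")
})

# if a stub like this somehow leaks beyond its test_that() block, every
# later test that genuinely probes for Python sees FALSE, which matches
# the ~248 "Python environment not found" failures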

I'm not sure why mockery is doing this? But it seems that it could be related to

I'm inclined to leave these tests out for the time being. Hopefully we can work out a way to resolve the issues in mockery.

Thank you everyone for your help and suggestions!
