We are using R to support both APIs and web apps (via Shiny) as commercial applications. The core of these applications is a small set of functions that run and analyse Monte Carlo simulations (we're projecting the uncertainty around pension pots).
A question has arisen over the approach we should take to testing the implementation of the algorithms in these functions (mainly because R is a new language for us to use in production, which gives us an opportunity to challenge our existing conventions).
Specifically, the question is how we gain comfort that the implementation of an algorithm has been appropriately and independently validated before it is used in production.
Broadly, the two competing views are:
1. The developer implements the algorithm and adds unit tests (using testthat) for edge cases (because the algorithm is typically complex, it is not possible to come up with non-edge-case expected outputs without re-implementing the algorithm). An independent developer then reviews the implementation and test coverage by reading through the code. The main implementation is then used to create a 'regression pack' of outputs which would be run on a continuous-integration basis (again using testthat), ensuring that future developments only produce differences where they are expected.
2. As 1, but a separate implementation of the algorithm is carried out by an independent developer, and the outputs from that implementation are used to test the main code.
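To make the two approaches concrete, here is a minimal testthat sketch. All function names and parameters (`project_pot`, `project_pot_independent`, the return assumptions) are hypothetical stand-ins for our actual simulation functions, and the "regression pack" is written to a temp file purely to illustrate the mechanism — in practice it would be generated once and committed or stored alongside the CI configuration.

```r
library(testthat)

# Hypothetical main implementation: simulates terminal pension-pot values
# under normally distributed annual returns (illustrative only).
project_pot <- function(initial, mean_return, sd_return, years, n_sims, seed = 1) {
  set.seed(seed)
  returns <- matrix(rnorm(years * n_sims, mean_return, sd_return), nrow = years)
  initial * apply(1 + returns, 2, prod)
}

# Approach 1a: edge-case unit tests, where expected values CAN be derived
# independently of the implementation.
test_that("edge cases behave as expected", {
  # Zero volatility collapses to deterministic compound growth.
  expect_equal(project_pot(100, 0.05, 0, 10, 1), 100 * 1.05^10)
  # A zero starting pot stays at zero regardless of returns.
  expect_equal(project_pot(0, 0.05, 0.1, 10, 100), rep(0, 100))
})

# Approach 1b: a regression pack of stored outputs, re-checked on CI so that
# future changes only produce differences where they are expected.
test_that("outputs match the stored regression pack", {
  pack_file <- tempfile(fileext = ".rds")
  saveRDS(project_pot(100, 0.05, 0.1, 30, 1000), pack_file)  # generated once, in practice
  expect_equal(project_pot(100, 0.05, 0.1, 30, 1000), readRDS(pack_file))
})

# Approach 2: an independent re-implementation (deliberately written
# differently -- a plain loop rather than vectorised code) used as an oracle.
project_pot_independent <- function(initial, mean_return, sd_return, years, n_sims, seed = 1) {
  set.seed(seed)
  # Draws must occur in the same order as the main implementation for an
  # exact comparison; otherwise compare distributional summaries instead.
  returns <- matrix(rnorm(years * n_sims, mean_return, sd_return), nrow = years)
  out <- numeric(n_sims)
  for (j in seq_len(n_sims)) {
    pot <- initial
    for (i in seq_len(years)) pot <- pot * (1 + returns[i, j])
    out[j] <- pot
  }
  out
}

test_that("independent implementation agrees with the main one", {
  expect_equal(project_pot(100, 0.05, 0.1, 30, 1000),
               project_pot_independent(100, 0.05, 0.1, 30, 1000),
               tolerance = 1e-10)
})
```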
The concern with 1 is that having a third party read and review the code (whether the original algorithm, the tests, or both) doesn't give sufficient comfort that the algorithm has been validated.
The concern with 2 is that a separate implementation of the algorithm (potentially the third implementation, depending on the detail of the unit tests) is wasteful.
I was wondering whether others using R in production have an approach they use to ensure confidence in the implementation of their algorithms?