Thanks for the suggestion: my report is indeed parametric, even if not to such an advanced level as shown in your link (which I will definitely study in the coming days).
Having said that, I don't think that's a viable solution for me. There are substantial differences among the data sets:
- the column names are different
- the "structure" of the "missingness" is different (sometimes very little data is missing at all; other times entire columns are missing, which indicates a failed sensor)
- even the physical meaning of the variables can be different
- in some cases I need to look at all the variables, in others I don't.
Thus, some level of manual editing of the report is, in my opinion, unavoidable. It's true that in most cases I need to perform an EDA and a survival analysis, but I don't think I can easily parametrize that: depending on the specific data set, I may be content with a Weibull model, or I may need a Cox proportional hazards model, or something even more complicated.
I would at the very least have to invest a lot of time (which I don't have right now) in studying tidyeval, and in writing scripts which are very flexible in terms of the number of variables involved, column names, and preprocessing and modeling steps to apply. I don't believe in "automatic Data Science": I think some manual intervention is needed. Or maybe it could be possible, but that would require building a Data Science platform: it's not something I can do on my own with an R Markdown report.
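To give a sense of the kind of flexibility I mean, here is a minimal sketch of a column-agnostic helper using tidy evaluation (this assumes dplyr; the data and the column names `time` and `temp` are made up for illustration):

```r
library(dplyr)

# Toy data standing in for one sensor table; other data sets
# would have different column names and missingness patterns
d <- tibble::tibble(time = 1:5, temp = c(20, 21, NA, 22, 23))

# Count missing values in whichever columns the caller names.
# The bare column names in `...` are forwarded to a tidyselect
# context via c(...), so nothing is hard-coded in the helper.
summarise_missing <- function(data, ...) {
  data %>%
    summarise(across(c(...), ~ sum(is.na(.x))))
}

summarise_missing(d, time, temp)
```

Writing every preprocessing and modeling step in this style, for an arbitrary number of columns and arbitrary names, is exactly the investment I can't afford at the moment.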
But this is just my personal opinion, and I'm sure that for more standardized tasks (for example, performing the same kind of analysis every week on similar data sets) people can be far more productive using parametrized reports.