I find this quite curious as well!
Whatever circumstances this happens in, I would consider it a bug, though it's hard to diagnose without a reprex.
This may be a bug in the `model_stack` print method: it may say there are no members when there indeed are. Is the resulting object with "0 members" still able to return predictions after `fit_members()`? Are there entries in its `model_stack[["member_fits"]]` slot?
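A quick way to check both, sketched below. I'm assuming your fitted stack is named `st` and that `test_data` is a placeholder for whatever holdout data you have; substitute your own object names:

```r
library(stacks)

# after fit_members(), this slot should be a named list of fitted workflows,
# one per retained candidate -- if it's empty, the "0 members" print is accurate
st[["member_fits"]]

# if the slot is populated, the object should still predict despite the
# misleading print output
predict(st, new_data = test_data)
```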
Some other thoughts...
I would then look at the data stack being passed to `blend_predictions()`. How many columns does it have? Does the first column contain the true assessment set values, and do the entries in the remaining columns look like plausible predictions from the candidate models?
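For example, assuming `data_st` stands in for your data stack (the output of `stacks() %>% add_candidates(...)`):

```r
# a data stack is a tibble: 1 outcome column plus 1 column per candidate
ncol(data_st)

# eyeball the contents: first column should hold the true assessment-set
# values, and the remaining columns should hold candidate predictions on
# a sensible scale (not all NA, not constant, etc.)
head(data_st)
```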
If so, I would then look at the `penalty` argument to `blend_predictions()`. What happens if you set this value to 0? That should prevent the meta-learner's lasso penalty from discarding any candidates.
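A minimal sketch of that experiment, again with `data_st` as a placeholder for your data stack:

```r
library(stacks)

# with penalty = 0 there is no lasso shrinkage, so no candidate's
# stacking coefficient can be driven to zero and dropped
st <- data_st %>%
  blend_predictions(penalty = 0) %>%
  fit_members()

# every candidate should now appear as a member
st
```

If the object prints with members at nonzero penalties too, that points back to the print-method bug above rather than to the blending step.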