(Author of future here.) First of all, for anyone trying to reproduce this: @natekratzer is on Windows (assuming the master R session and all workers run Windows), meaning that:
plan(multiprocess)
is equivalent to:
plan(multisession)
If you're on Linux or macOS, that would instead become plan(multicore), which is a different creature: it uses forked processes rather than background R sessions for parallelization, which gives it different (better or worse) memory properties. (I'm constantly debating with myself whether it was a good idea to provide the "virtual" plan(multiprocess).)
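To make the distinction concrete, here is a minimal sketch using the future package. The explicit plan(multisession) call mirrors what plan(multiprocess) resolves to on Windows; on Linux/macOS you could swap in the commented-out plan(multicore) line instead:

```r
library(future)

## On Windows, plan(multiprocess) is effectively this: background R sessions.
plan(multisession, workers = 2)

## On Linux/macOS, plan(multiprocess) would instead resolve to forked processes:
# plan(multicore, workers = 2)

## Each future runs in a separate R process, so its PID differs from the master's.
f <- future(Sys.getpid())
value(f)
Sys.getpid()
```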
To narrow down the actual problem: the error
error in unserialize(node$con) error reading from connection
originates from parallel:::recvData.SOCKnode(), which in turn is called by parallel:::recvResult() - this is the framework used by plan(multisession). This recvData() function is called when the master R process tries to retrieve results back from a worker. The error therefore suggests that the results were never sent, or only partially sent, which in turn suggests that the worker's R process has terminated (*).
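You can provoke this very error deliberately by terminating a worker from inside a future - a sketch, assuming a multisession plan (the exact error wording may vary slightly across R versions):

```r
library(future)
plan(multisession, workers = 2)

## Kill the worker's own R process mid-evaluation; when the master later
## tries to read the result, unserialize(node$con) fails with
## "error reading from connection".
f <- future({ tools::pskill(Sys.getpid()); 42 })
tryCatch(value(f), error = function(e) conditionMessage(e))
```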
Looking at the code for these workers, which is simply parallel:::slaveLoop(), I don't see other reasons for the worker terminating other than
- running out of memory,
- the process being killed by external signals, or
- running corrupt code causing R to core dump.
A core dump is unlikely and would just as likely happen when running sequentially. Also, core dumps typically result from executing really "bad" code (memory errors in native code or similar).
So, as others have already suggested, I would put my bet on an out-of-memory problem. Other than externally monitoring the memory consumption of these workers, I don't think there's an easy way to know whether it's an out-of-memory error or not.
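As a crude stopgap, you can ask each worker to report on itself - a sketch that samples each worker's PID and gc()-reported memory use (column "(Mb)" is the second column of gc()'s output). This is only a point-in-time sample from within R; OS-level tools (Task Manager, ps, top) give the real picture:

```r
library(future)
plan(multisession, workers = 2)

## Each future reports the PID and current memory use (in MiB) of the
## worker process it happens to run on.
fs <- lapply(1:2, function(i) future({
  list(pid = Sys.getpid(), mb_used = sum(gc()[, 2]))
}))
lapply(fs, value)
```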
On the future roadmap, I'm hoping to add (optional) automatic timing and memory benchmarking of futures being evaluated (https://github.com/HenrikBengtsson/future/issues/59). That could have helped here, e.g. by observing how much memory successful futures consume with one, two, three,... workers until the error kicks in.
Footnote: (*) It should be possible to add a post-mortem diagnostic to the above error that tests whether the (local) worker is still alive. If it isn't, I think one could provide a more informative error message along the lines of "It looks like the underlying R process (PID ...) of the SOCKnode worker has terminated". I'll add it to the todo list.
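Such a diagnostic might look roughly like this - a hypothetical sketch, not the actual future implementation. It uses the ps package for the liveness check, and assumes the worker's PID (worker_pid below) was recorded when the cluster was set up (e.g. via an initial future returning Sys.getpid()):

```r
## Hypothetical post-mortem check: after catching the unserialize() error,
## test whether the worker's OS process still exists.
worker_alive <- function(pid) {
  tryCatch(ps::ps_is_running(ps::ps_handle(pid)),
           error = function(e) FALSE)
}

## 'worker_pid' is assumed to have been recorded earlier.
if (!worker_alive(worker_pid)) {
  stop(sprintf("It looks like the underlying R process (PID %d) of the SOCKnode worker has terminated", worker_pid))
}
```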