I would add that you will get a more complete picture if you calculate every possible assignment and its associated cost for each random sample. Depending on the exact nature of the random inputs (unlikely with normal distributions, but possible in general), one assignment could be optimal in a plurality of samples while many of its non-optimal costs run much higher than those of a second assignment that consistently comes in second place.
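Here is a rough sketch of the simulation I have in mind, in base Python rather than tidyverse R. All the means and standard deviations are made up for illustration; the point is that every assignment is scored against the *same* random draw, so you can tally both how often each assignment wins and what its cost looks like when it doesn't:

```python
import itertools
import random
from collections import Counter

# Hypothetical 3 agents x 3 tasks; cost of agent i on task j ~ Normal(mean, sd).
means = [[10, 12, 14],
         [11, 10, 13],
         [12, 13, 10]]
sds   = [[2, 1, 3],
         [1, 2, 1],
         [3, 1, 2]]

# An assignment maps agent i to task a[i]; enumerate all of them.
assignments = list(itertools.permutations(range(3)))

def simulate(n_samples=10_000, seed=42):
    rng = random.Random(seed)
    wins = Counter()
    costs = {a: [] for a in assignments}
    for _ in range(n_samples):
        # One realization of every agent/task cost.
        draw = [[rng.gauss(means[i][j], sds[i][j]) for j in range(3)]
                for i in range(3)]
        # Score every assignment on this same sample.
        for a in assignments:
            costs[a].append(sum(draw[i][a[i]] for i in range(3)))
        best = min(assignments, key=lambda a: costs[a][-1])
        wins[best] += 1
    return wins, costs

wins, costs = simulate()
for a, n in wins.most_common():
    mean_cost = sum(costs[a]) / len(costs[a])
    print(a, f"optimal in {n} samples, mean cost {mean_cost:.2f}")
```

With these particular numbers the diagonal assignment dominates, but the per-assignment cost lists are what let you spot the "consistent second place" situation described above.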
I'm tempted to write such a simulation myself, as it wouldn't be a ton of work and I need more tidyverse-style simulations under my belt, but something from my old math stats class is bugging me and making me think I'm missing an analytical solution.
Edit: Oh, right. As I suspected, it took about three lines of code before I remembered what I was thinking of. Assuming independent normal distributions for each task, as stated, the mean cost of a given assignment is just the sum of the task means, and the standard deviation of the cost is the square root of the sum of the squared standard deviations (i.e., the variances).
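The arithmetic, with made-up numbers for one hypothetical assignment (three tasks, each Normal(10, 2)):

```python
import math

# Per-task means and standard deviations for one assignment (hypothetical).
task_means = [10, 10, 10]
task_sds   = [2, 2, 2]

# Sum of independent normals is normal:
total_mean = sum(task_means)                       # sum of the means
total_sd = math.sqrt(sum(s**2 for s in task_sds))  # sqrt of summed variances

print(total_mean)            # 30
print(round(total_sd, 4))    # sqrt(12) ~= 3.4641
```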
If the goal is to minimize long-term average cost, the standard deviations don't matter; just pick the assignment with the lowest mean. If you want a different loss function (say, minimizing the chance that any given run's cost exceeds 30), then calculate the corresponding probability from the combined normal for each candidate assignment and compare.
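Python's `statistics.NormalDist` makes that threshold calculation a one-liner. The two combined normals below are made up to show why the loss function matters: the lower-mean assignment can still be the worse choice against a threshold.

```python
from statistics import NormalDist

# Two hypothetical assignments' combined cost distributions:
# A has the higher mean but a larger spread; B has the lower mean.
a = NormalDist(mu=33, sigma=5)
b = NormalDist(mu=32, sigma=1)

# Probability that a single run's total cost exceeds 30:
p_a = 1 - a.cdf(30)
p_b = 1 - b.cdf(30)

print(f"A: {p_a:.3f}, B: {p_b:.3f}")
```

B wins on mean cost, yet it almost always lands above 30, so under this loss function you would pick A.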
I may still show how I would come up with a simulation anyway if I get some time, though.