I'm not sure this is the best channel for this type of question, but it feels too half-baked for Stack Overflow or Cross Validated.
Ultimately, I'm looking for resources (tutorials) to help move from deterministic operations research / optimization models to stochastic ones. Admittedly, this is not my strong suit, so I may be abusing terminology.
Consider the following assignment (agent-to-task) problem:
- Three (3) tasks need to be completed.
- Four (4) agents (workers) are available to assign to those tasks.
- Each agent can be assigned to at most one task, and not every agent needs to be used (obviously, since there are more agents than tasks).
- Minimize (or maximize) the total assignment cost.
| Agent | Task 1 | Task 2 | Task 3 |
|-------|--------|--------|--------|
| A     | 11     | 14     | 6      |
| B     | 8      | 10     | 11     |
| C     | 9      | 12     | 7      |
| D     | 10     | 11     | 7      |
Which we could solve (minimize) via the following binary assignments:
| Agent | Task 1 | Task 2 | Task 3 |
|-------|--------|--------|--------|
| A     | 0      | 0      | 1      |
| B     | 1      | 0      | 0      |
| C     | 0      | 0      | 0      |
| D     | 0      | 1      | 0      |
...yielding a total cost of 25.
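As a sanity check on that baseline, here's a quick sketch of the deterministic solve. I did it in Python (SciPy's Hungarian-algorithm solver) only because that's what I had handy; I assume the R equivalent would use something like `lpSolve` or `ompr`, which is part of what I'm asking about:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Rows = agents A..D, columns = Tasks 1..3 (the cost table above).
costs = np.array([
    [11, 14, 6],
    [8, 10, 11],
    [9, 12, 7],
    [10, 11, 7],
])

# With a rectangular matrix, linear_sum_assignment matches each task
# to a distinct agent and leaves the extra agent unassigned.
rows, cols = linear_sum_assignment(costs)
total = costs[rows, cols].sum()
print(list(zip(rows, cols)), total)  # total comes out to 25
```

(Note there are ties: B→1, D→2, A→3 and C→1, B→2, A→3 both cost 25, so the solver may return either.)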
...but what happens if, instead of plain-vanilla deterministic costs, we consider variability? Say each cost is normally distributed, N(mu, sigma):
| Agent | Task 1    | Task 2    | Task 3   |
|-------|-----------|-----------|----------|
| A     | (11, 2.0) | (14, 1.0) | (6, 1.5) |
| B     | (8, 3.0)  | (10, 2.5) | (11, 1.0)|
| C     | (9, 1.5)  | (12, 0.5) | (7, 0.5) |
| D     | (10, 1.0) | (11, 2.5) | (7, 1.5) |
Questions

How does the approach change? Do we bootstrap here? Simulate?

How do the assignments change?

If we assume Gaussian distributions, are there any tricks or shortcuts?

How would I code this in R? I've found a few starting points (CRAN task views, random books/postings), but nothing really "complete" or at the appropriate beginner level.

Packages? Vignettes?