I don't know much about sensitivity analysis or how a simulation can help you there, but here are some thoughts on the simulation part.

One thing is not clear to me: what does one row of your dataset represent? Is it one individual? If so, each individual response is a Bernoulli event, and only the sum of all these responses follows a binomial distribution.
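To illustrate (a sketch using the probability 0.32 that appears below), each individual response can be drawn as a Bernoulli variable, and the sum of those draws is one draw from a binomial distribution:

```r
set.seed(42)                    # for reproducibility
p <- 0.32                       # probability of a "yes" (success)
responses <- rbinom(100, 1, p)  # 100 individual Bernoulli draws, each 0 or 1
sum(responses)                  # this sum is distributed as Binomial(n = 100, p = 0.32)
```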

To take the classic example of coin flipping, let's say each individual flips a coin, with probability 0.32 of heads (it's an unfair coin) and 1 - 0.32 = 0.68 of tails.

In R we can simulate that with `rbinom(1, 1, 0.32)`:

```
rbinom(1,1,.32)
#> [1] 0
```

But what if you have 10 individuals who flip a coin?

```
replicate(n = 10, rbinom(1,1,.32))
#> [1] 0 0 0 1 0 0 1 0 1 1
```

Since it's slightly painful to write `replicate` every time, R has a shortcut: the first argument to `rbinom()` is the number of individuals:

```
rbinom(10, 1, .32)
#> [1] 1 1 0 1 1 0 0 0 0 1
```

But now, what if a single individual flips the coin several times? The first time, they get either `1` (heads, with probability 0.32) or `0` (tails). Flipping twice, they can get 00, 01, 10, or 11, so the total number of heads is 0, 1, or 2. That can be described with the binomial distribution for `n = 2` coin flips, with probability `p = 0.32`. In R:

```
rbinom(1, 2, .32)
#> [1] 0
```
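As a quick sanity check (a sketch, not strictly needed), the simulated frequencies of 0, 1, and 2 heads should match the theoretical binomial probabilities given by `dbinom()`:

```r
set.seed(1)
draws <- rbinom(10000, 2, 0.32)  # 10,000 individuals, 2 flips each
prop.table(table(draws))         # observed frequencies of 0, 1, and 2 heads
dbinom(0:2, 2, 0.32)             # theoretical probabilities: 0.4624 0.4352 0.1024
```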

### Conclusion

So, if you want to simulate one yes/no choice per individual (per row of the data frame), the correct way is:

```
N_ind <- nrow(mydata)  # number of individuals
n <- 1                 # number of yes/no choices per individual
p1 <- 0.32             # probability of a positive choice
mydata$choice <- rbinom(N_ind, n, p1)
```

Of course, if the outcome for one individual is the result of several yes/no decisions, you will have to change `n`.
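For instance (a sketch with a made-up data frame, since `mydata` isn't shown in the question), if each individual makes three yes/no decisions:

```r
mydata <- data.frame(id = 1:5)  # toy stand-in for the real data frame
N_ind <- nrow(mydata)           # number of individuals
n <- 3                          # three yes/no decisions per individual
p1 <- 0.32                      # probability of each positive decision
mydata$choice <- rbinom(N_ind, n, p1)  # each value is now between 0 and n
mydata$choice
```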

### Aggregation

A pointer for the next step: with `dplyr` you can do something like this:

```
library(dplyr)
mydata %>%
  group_by(county) %>%
  summarize(nb_positive_choices = sum(choice),
            prop_positive_choices = mean(choice))
```