First, make your data as small as possible to conserve memory. If you're going to call unique(), do it before the join. If there are columns you don't need, drop them before the join.
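A quick sketch of what that looks like (the data frame and column names here are hypothetical):

```r
# Hypothetical lookup table with an unneeded column and duplicate rows
lookup <- data.frame(id   = c(1, 1, 2),
                     val  = c('a', 'a', 'b'),
                     junk = c(9, 9, 9))

# Drop the unneeded column, then deduplicate, *before* merging:
# 2 rows to join instead of 3
slim <- unique(lookup[, c("id", "val")])
nrow(slim)
#> [1] 2
```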
All that done, joins can still occasionally blow up the size of your data. One thing to check is what's happening with any NA values. If the columns by which you are joining contain NAs, by default merge() treats NA as matching NA, so every NA row in one table pairs with every NA row in the other: a Cartesian (or "cross") match on those rows that multiplies them:
merge(data.frame(x = c(1, NA, NA)),
      data.frame(x = c(1, NA, NA, NA),
                 y = c('a', 'b', 'c', 'd')))
#> x y
#> 1 1 a
#> 2 NA b
#> 3 NA c
#> 4 NA d
#> 5 NA b
#> 6 NA c
#> 7 NA d
If you've got a few hundred or a few thousand NAs, that can make your data very big very fast: 200 NAs in each table already yields 200 × 200 = 40,000 rows. You can drop NAs from the match by setting merge()'s incomparables parameter:
merge(data.frame(x = c(1, NA, NA)),
      data.frame(x = c(1, NA, NA, NA),
                 y = c('a', 'b', 'c', 'd')),
      incomparables = NA)
#> x y
#> 1 1 a
The same size explosion can happen whenever a key value is duplicated in both tables with otherwise different rows, since every matching pair of rows appears in the result, but NAs are a common culprit.
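The arithmetic for duplicated keys is the same as for NAs; a small sketch:

```r
# Two rows with x == 1 on the left times three on the right
# gives 2 * 3 = 6 rows in the merged result
left  <- data.frame(x = c(1, 1))
right <- data.frame(x = c(1, 1, 1), y = c('a', 'b', 'c'))
nrow(merge(left, right))
#> [1] 6
```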
If you are still running out of memory, there is no single solution. Sometimes it requires rethinking what you're trying to do. Sometimes it requires scaling up to bigger hardware, or pushing the work into SQL. Sometimes data.table can be used to maximize speed and minimize intermediate copies, since it modifies data by reference. Which is appropriate varies by context, but for your sanity, try the simple solutions first.
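For completeness, a minimal data.table sketch of the same toy join as above (this assumes the data.table package is installed):

```r
library(data.table)

dt1 <- data.table(x = c(1, NA, NA))
dt2 <- data.table(x = c(1, NA, NA, NA), y = c('a', 'b', 'c', 'd'))

# Join dt1 into dt2 on x. allow.cartesian = TRUE opts in to
# many-to-many matches that data.table would otherwise refuse
# once the result outgrows the inputs.
merged <- dt2[dt1, on = "x", allow.cartesian = TRUE]
nrow(merged)
#> [1] 7
```

Note that data.table, like base merge(), matches NA to NA by default, so the NA blowup still applies; the win is speed and fewer copies, not a different join semantics.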