For loop (or if someone has a better idea) on large dataset

That will be a non-duplicate Martin, i.e. not me.

Without an example of your data it's really not possible to judge what is going on. I doubt there is a bug, but you are using allow.cartesian=TRUE, so the size of the result can explode due to duplicate key values. It's possible this is not the behaviour you intend.
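
To make that concrete, here is a minimal sketch (the tables and values are invented for illustration, not taken from the original post) of how duplicate key values blow up a join, and why data.table asks for allow.cartesian=TRUE before producing such a result:

    library(data.table)

    # Both tables carry the same key value three times.
    dt1 <- data.table(id = c(1, 1, 1), x = c("a", "b", "c"))
    dt2 <- data.table(id = c(1, 1, 1), y = c("p", "q", "r"))
    setkey(dt1, id)

    # Every row of dt2 matches all three rows of dt1, so the join
    # returns 3 * 3 = 9 rows -- more than nrow(dt1) + nrow(dt2) = 6.
    # data.table errors on that by default as a safeguard; with
    # allow.cartesian = TRUE it proceeds, and on real data the
    # result can be far larger than either input.
    res <- dt1[dt2, allow.cartesian = TRUE]
    nrow(res)  # 9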

No, it's intended. We looked at this some more: when not using nomatch = 0, the entire system hangs. We checked the resource monitors and the CPU, for lack of a better term, "locks." With nomatch = 0, we process 950 million rows within 3 hours.
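
For later readers, a minimal sketch of the nomatch = 0 idiom (again with invented tables): it turns the default NA-padded join into an inner join, so unmatched rows are dropped rather than materialised, which fits the smaller, faster result described above:

    library(data.table)

    dt1 <- data.table(id = c(1, 2), x = c("a", "b"))
    dt2 <- data.table(id = c(2, 3), y = c("p", "q"))
    setkey(dt1, id)

    # Default nomatch = NA: unmatched rows of dt2 are kept, padded with NA.
    dt1[dt2]               # ids 2 and 3; x is NA for id 3

    # nomatch = 0 (nomatch = NULL in newer data.table versions) drops
    # unmatched rows instead -- an inner join, producing a smaller result.
    dt1[dt2, nomatch = 0]  # id 2 only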

Thanks for all the help, everyone!